00:00:00.000 Started by upstream project "autotest-spdk-master-vs-dpdk-v23.11" build number 1037 00:00:00.000 originally caused by: 00:00:00.000 Started by upstream project "nightly-trigger" build number 3699 00:00:00.000 originally caused by: 00:00:00.000 Started by timer 00:00:00.112 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.112 The recommended git tool is: git 00:00:00.113 using credential 00000000-0000-0000-0000-000000000002 00:00:00.115 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.153 Fetching changes from the remote Git repository 00:00:00.156 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.214 Using shallow fetch with depth 1 00:00:00.214 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.214 > git --version # timeout=10 00:00:00.259 > git --version # 'git version 2.39.2' 00:00:00.259 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.289 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.289 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:07.516 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:07.528 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:07.539 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:07.539 > git config core.sparsecheckout # timeout=10 00:00:07.550 > git read-tree -mu HEAD # timeout=10 00:00:07.564 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:07.585 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:07.585 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:07.685 [Pipeline] Start of Pipeline 00:00:07.695 [Pipeline] library 00:00:07.697 Loading library shm_lib@master 00:00:07.697 Library shm_lib@master is cached. Copying from home. 00:00:07.710 [Pipeline] node 00:00:07.725 Running on WFP37 in /var/jenkins/workspace/nvmf-phy-autotest 00:00:07.726 [Pipeline] { 00:00:07.733 [Pipeline] catchError 00:00:07.734 [Pipeline] { 00:00:07.746 [Pipeline] wrap 00:00:07.755 [Pipeline] { 00:00:07.762 [Pipeline] stage 00:00:07.764 [Pipeline] { (Prologue) 00:00:07.962 [Pipeline] sh 00:00:08.256 + logger -p user.info -t JENKINS-CI 00:00:08.274 [Pipeline] echo 00:00:08.276 Node: WFP37 00:00:08.284 [Pipeline] sh 00:00:08.585 [Pipeline] setCustomBuildProperty 00:00:08.598 [Pipeline] echo 00:00:08.600 Cleanup processes 00:00:08.605 [Pipeline] sh 00:00:08.890 + sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:00:08.890 1445577 sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:00:08.904 [Pipeline] sh 00:00:09.190 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:00:09.190 ++ grep -v 'sudo pgrep' 00:00:09.190 ++ awk '{print $1}' 00:00:09.190 + sudo kill -9 00:00:09.190 + true 00:00:09.207 [Pipeline] cleanWs 00:00:09.217 [WS-CLEANUP] Deleting project workspace... 00:00:09.217 [WS-CLEANUP] Deferred wipeout is used... 
00:00:09.224 [WS-CLEANUP] done 00:00:09.228 [Pipeline] setCustomBuildProperty 00:00:09.246 [Pipeline] sh 00:00:09.529 + sudo git config --global --replace-all safe.directory '*' 00:00:09.619 [Pipeline] httpRequest 00:00:09.941 [Pipeline] echo 00:00:09.942 Sorcerer 10.211.164.20 is alive 00:00:09.950 [Pipeline] retry 00:00:09.951 [Pipeline] { 00:00:09.963 [Pipeline] httpRequest 00:00:09.967 HttpMethod: GET 00:00:09.968 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:09.970 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:09.995 Response Code: HTTP/1.1 200 OK 00:00:09.996 Success: Status code 200 is in the accepted range: 200,404 00:00:09.996 Saving response body to /var/jenkins/workspace/nvmf-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:31.262 [Pipeline] } 00:00:31.279 [Pipeline] // retry 00:00:31.287 [Pipeline] sh 00:00:31.583 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:31.600 [Pipeline] httpRequest 00:00:31.982 [Pipeline] echo 00:00:31.983 Sorcerer 10.211.164.20 is alive 00:00:31.990 [Pipeline] retry 00:00:31.991 [Pipeline] { 00:00:32.002 [Pipeline] httpRequest 00:00:32.006 HttpMethod: GET 00:00:32.006 URL: http://10.211.164.20/packages/spdk_8d3947977640da882a3cdcc21a7575115b7e7787.tar.gz 00:00:32.007 Sending request to url: http://10.211.164.20/packages/spdk_8d3947977640da882a3cdcc21a7575115b7e7787.tar.gz 00:00:32.014 Response Code: HTTP/1.1 200 OK 00:00:32.014 Success: Status code 200 is in the accepted range: 200,404 00:00:32.014 Saving response body to /var/jenkins/workspace/nvmf-phy-autotest/spdk_8d3947977640da882a3cdcc21a7575115b7e7787.tar.gz 00:06:06.087 [Pipeline] } 00:06:06.103 [Pipeline] // retry 00:06:06.110 [Pipeline] sh 00:06:06.396 + tar --no-same-owner -xf spdk_8d3947977640da882a3cdcc21a7575115b7e7787.tar.gz 00:06:08.944 [Pipeline] sh 00:06:09.233 + git -C spdk log --oneline -n5 00:06:09.233 8d3947977 spdk_dd: simplify `io_uring_peek_cqe` return code processing 00:06:09.233 77ee034c7 bdev/nvme: Add lock to unprotected operations around attach controller 00:06:09.233 48454bb28 bdev/nvme: Add lock to unprotected operations around detach controller 00:06:09.233 4b59d7893 bdev/nvme: Use nbdev always for local nvme_bdev pointer variables 00:06:09.233 e56f1618f lib/ftl: Add explicit support for write unit sizes of base device 00:06:09.251 [Pipeline] withCredentials 00:06:09.261 > git --version # timeout=10 00:06:09.274 > git --version # 'git version 2.39.2' 00:06:09.291 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:06:09.294 [Pipeline] { 00:06:09.303 [Pipeline] retry 00:06:09.305 [Pipeline] { 00:06:09.323 [Pipeline] sh 00:06:09.609 + git ls-remote http://dpdk.org/git/dpdk-stable v23.11 00:06:10.593 [Pipeline] } 00:06:10.613 [Pipeline] // retry 00:06:10.619 [Pipeline] } 00:06:10.636 [Pipeline] // withCredentials 00:06:10.645 [Pipeline] httpRequest 00:06:11.332 [Pipeline] echo 00:06:11.334 Sorcerer 10.211.164.20 is alive 00:06:11.341 [Pipeline] retry 00:06:11.343 [Pipeline] { 00:06:11.350 [Pipeline] httpRequest 00:06:11.354 HttpMethod: GET 00:06:11.354 URL: http://10.211.164.20/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:06:11.355 Sending request to url: http://10.211.164.20/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:06:11.365 Response Code: HTTP/1.1 200 OK 00:06:11.366 Success: Status code 200 is in the accepted range: 200,404 00:06:11.366 
Saving response body to /var/jenkins/workspace/nvmf-phy-autotest/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:06:53.083 [Pipeline] } 00:06:53.102 [Pipeline] // retry 00:06:53.109 [Pipeline] sh 00:06:53.392 + tar --no-same-owner -xf dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:06:54.779 [Pipeline] sh 00:06:55.066 + git -C dpdk log --oneline -n5 00:06:55.066 eeb0605f11 version: 23.11.0 00:06:55.066 238778122a doc: update release notes for 23.11 00:06:55.066 46aa6b3cfc doc: fix description of RSS features 00:06:55.066 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:06:55.066 7e421ae345 devtools: support skipping forbid rule check 00:06:55.074 [Pipeline] } 00:06:55.088 [Pipeline] // stage 00:06:55.096 [Pipeline] stage 00:06:55.098 [Pipeline] { (Prepare) 00:06:55.115 [Pipeline] writeFile 00:06:55.130 [Pipeline] sh 00:06:55.413 + logger -p user.info -t JENKINS-CI 00:06:55.424 [Pipeline] sh 00:06:55.707 + logger -p user.info -t JENKINS-CI 00:06:55.717 [Pipeline] sh 00:06:56.000 + cat autorun-spdk.conf 00:06:56.000 SPDK_RUN_FUNCTIONAL_TEST=1 00:06:56.000 SPDK_TEST_NVMF=1 00:06:56.000 SPDK_TEST_NVME_CLI=1 00:06:56.000 SPDK_TEST_NVMF_NICS=mlx5 00:06:56.000 SPDK_RUN_UBSAN=1 00:06:56.000 NET_TYPE=phy 00:06:56.000 SPDK_TEST_NATIVE_DPDK=v23.11 00:06:56.000 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:06:56.007 RUN_NIGHTLY=1 00:06:56.011 [Pipeline] readFile 00:06:56.031 [Pipeline] withEnv 00:06:56.033 [Pipeline] { 00:06:56.043 [Pipeline] sh 00:06:56.324 + set -ex 00:06:56.324 + [[ -f /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf ]] 00:06:56.324 + source /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf 00:06:56.324 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:06:56.324 ++ SPDK_TEST_NVMF=1 00:06:56.324 ++ SPDK_TEST_NVME_CLI=1 00:06:56.324 ++ SPDK_TEST_NVMF_NICS=mlx5 00:06:56.324 ++ SPDK_RUN_UBSAN=1 00:06:56.324 ++ NET_TYPE=phy 00:06:56.324 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:06:56.324 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:06:56.324 ++ RUN_NIGHTLY=1 00:06:56.324 + case $SPDK_TEST_NVMF_NICS in 00:06:56.324 + DRIVERS=mlx5_ib 00:06:56.324 + [[ -n mlx5_ib ]] 00:06:56.324 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:06:56.324 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:07:02.889 rmmod: ERROR: Module irdma is not currently loaded 00:07:02.889 rmmod: ERROR: Module i40iw is not currently loaded 00:07:02.889 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:07:02.889 + true 00:07:02.889 + for D in $DRIVERS 00:07:02.889 + sudo modprobe mlx5_ib 00:07:02.889 + exit 0 00:07:02.897 [Pipeline] } 00:07:02.911 [Pipeline] // withEnv 00:07:02.916 [Pipeline] } 00:07:02.928 [Pipeline] // stage 00:07:02.935 [Pipeline] catchError 00:07:02.936 [Pipeline] { 00:07:02.950 [Pipeline] timeout 00:07:02.951 Timeout set to expire in 1 hr 0 min 00:07:02.952 [Pipeline] { 00:07:02.968 [Pipeline] stage 00:07:02.970 [Pipeline] { (Tests) 00:07:02.984 [Pipeline] sh 00:07:03.294 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-phy-autotest 00:07:03.294 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest 00:07:03.294 + DIR_ROOT=/var/jenkins/workspace/nvmf-phy-autotest 00:07:03.294 + [[ -n /var/jenkins/workspace/nvmf-phy-autotest ]] 00:07:03.294 + DIR_SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:07:03.294 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-phy-autotest/output 00:07:03.294 + [[ -d /var/jenkins/workspace/nvmf-phy-autotest/spdk ]] 00:07:03.294 + [[ ! 
-d /var/jenkins/workspace/nvmf-phy-autotest/output ]] 00:07:03.294 + mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/output 00:07:03.294 + [[ -d /var/jenkins/workspace/nvmf-phy-autotest/output ]] 00:07:03.294 + [[ nvmf-phy-autotest == pkgdep-* ]] 00:07:03.294 + cd /var/jenkins/workspace/nvmf-phy-autotest 00:07:03.294 + source /etc/os-release 00:07:03.294 ++ NAME='Fedora Linux' 00:07:03.294 ++ VERSION='39 (Cloud Edition)' 00:07:03.294 ++ ID=fedora 00:07:03.294 ++ VERSION_ID=39 00:07:03.294 ++ VERSION_CODENAME= 00:07:03.294 ++ PLATFORM_ID=platform:f39 00:07:03.294 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:07:03.294 ++ ANSI_COLOR='0;38;2;60;110;180' 00:07:03.294 ++ LOGO=fedora-logo-icon 00:07:03.294 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:07:03.294 ++ HOME_URL=https://fedoraproject.org/ 00:07:03.294 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:07:03.294 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:07:03.294 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:07:03.294 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:07:03.294 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:07:03.294 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:07:03.294 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:07:03.294 ++ SUPPORT_END=2024-11-12 00:07:03.294 ++ VARIANT='Cloud Edition' 00:07:03.294 ++ VARIANT_ID=cloud 00:07:03.294 + uname -a 00:07:03.294 Linux spdk-wfp-37 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:07:03.294 + sudo /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status 00:07:05.830 Hugepages 00:07:05.830 node hugesize free / total 00:07:05.830 node0 1048576kB 0 / 0 00:07:05.830 node0 2048kB 0 / 0 00:07:05.830 node1 1048576kB 0 / 0 00:07:05.830 node1 2048kB 0 / 0 00:07:05.830 00:07:05.830 Type BDF Vendor Device NUMA Driver Device Block devices 00:07:05.830 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:07:05.830 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:07:05.830 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:07:05.830 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:07:05.830 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:07:05.830 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:07:05.831 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:07:05.831 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:07:05.831 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:07:05.831 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:07:05.831 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:07:05.831 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:07:05.831 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:07:05.831 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:07:05.831 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:07:05.831 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:07:05.831 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:07:05.831 + rm -f /tmp/spdk-ld-path 00:07:05.831 + source autorun-spdk.conf 00:07:05.831 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:07:05.831 ++ SPDK_TEST_NVMF=1 00:07:05.831 ++ SPDK_TEST_NVME_CLI=1 00:07:05.831 ++ SPDK_TEST_NVMF_NICS=mlx5 00:07:05.831 ++ SPDK_RUN_UBSAN=1 00:07:05.831 ++ NET_TYPE=phy 00:07:05.831 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:07:05.831 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:07:05.831 ++ RUN_NIGHTLY=1 00:07:05.831 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:07:05.831 + [[ -n '' ]] 00:07:05.831 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:07:05.831 + for M in /var/spdk/build-*-manifest.txt 
00:07:05.831 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:07:05.831 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/ 00:07:05.831 + for M in /var/spdk/build-*-manifest.txt 00:07:05.831 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:07:05.831 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/ 00:07:05.831 + for M in /var/spdk/build-*-manifest.txt 00:07:05.831 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:07:05.831 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/ 00:07:05.831 ++ uname 00:07:05.831 + [[ Linux == \L\i\n\u\x ]] 00:07:05.831 + sudo dmesg -T 00:07:06.091 + sudo dmesg --clear 00:07:06.091 + dmesg_pid=1447855 00:07:06.091 + [[ Fedora Linux == FreeBSD ]] 00:07:06.091 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:06.091 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:06.091 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:07:06.091 + [[ -x /usr/src/fio-static/fio ]] 00:07:06.091 + export FIO_BIN=/usr/src/fio-static/fio 00:07:06.091 + FIO_BIN=/usr/src/fio-static/fio 00:07:06.091 + sudo dmesg -Tw 00:07:06.091 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:07:06.091 + [[ ! -v VFIO_QEMU_BIN ]] 00:07:06.091 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:07:06.091 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:06.091 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:06.091 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:07:06.091 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:06.091 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:06.091 + spdk/autorun.sh /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf 00:07:06.091 13:38:05 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:07:06.091 13:38:05 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf 00:07:06.091 13:38:05 -- nvmf-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:07:06.091 13:38:05 -- nvmf-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:07:06.091 13:38:05 -- nvmf-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1 00:07:06.091 13:38:05 -- nvmf-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_NICS=mlx5 00:07:06.091 13:38:05 -- nvmf-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_RUN_UBSAN=1 00:07:06.091 13:38:05 -- nvmf-phy-autotest/autorun-spdk.conf@6 -- $ NET_TYPE=phy 00:07:06.091 13:38:05 -- nvmf-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_TEST_NATIVE_DPDK=v23.11 00:07:06.091 13:38:05 -- nvmf-phy-autotest/autorun-spdk.conf@8 -- $ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:07:06.091 13:38:05 -- nvmf-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=1 00:07:06.091 13:38:05 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:07:06.091 13:38:05 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf 00:07:06.091 13:38:05 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:07:06.091 13:38:05 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:06.091 13:38:05 -- scripts/common.sh@15 -- $ shopt -s extglob 00:07:06.091 13:38:05 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:07:06.091 13:38:05 -- scripts/common.sh@552 -- $ 
[[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:06.091 13:38:05 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:06.091 13:38:05 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:06.091 13:38:05 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:06.091 13:38:05 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:06.091 13:38:05 -- paths/export.sh@5 -- $ export PATH 00:07:06.091 13:38:05 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:06.091 13:38:05 -- common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output 00:07:06.091 13:38:05 -- common/autobuild_common.sh@493 -- $ date +%s 00:07:06.091 13:38:05 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733402285.XXXXXX 00:07:06.091 13:38:05 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733402285.EdhSGK 00:07:06.091 13:38:05 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:07:06.091 13:38:05 -- common/autobuild_common.sh@499 -- $ '[' -n v23.11 ']' 00:07:06.091 13:38:05 -- common/autobuild_common.sh@500 -- $ dirname /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:07:06.091 13:38:05 -- common/autobuild_common.sh@500 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-phy-autotest/dpdk' 00:07:06.091 13:38:05 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp' 00:07:06.091 13:38:05 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:07:06.091 13:38:05 -- common/autobuild_common.sh@509 -- $ get_config_params 00:07:06.091 13:38:05 -- common/autotest_common.sh@409 -- $ 
xtrace_disable 00:07:06.091 13:38:05 -- common/autotest_common.sh@10 -- $ set +x 00:07:06.091 13:38:05 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build' 00:07:06.091 13:38:05 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:07:06.091 13:38:05 -- pm/common@17 -- $ local monitor 00:07:06.091 13:38:05 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:07:06.091 13:38:05 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:07:06.091 13:38:05 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:07:06.091 13:38:05 -- pm/common@21 -- $ date +%s 00:07:06.091 13:38:05 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:07:06.091 13:38:05 -- pm/common@21 -- $ date +%s 00:07:06.091 13:38:05 -- pm/common@25 -- $ sleep 1 00:07:06.091 13:38:05 -- pm/common@21 -- $ date +%s 00:07:06.091 13:38:05 -- pm/common@21 -- $ date +%s 00:07:06.091 13:38:05 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733402285 00:07:06.091 13:38:05 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733402285 00:07:06.091 13:38:05 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733402285 00:07:06.091 13:38:05 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733402285 00:07:06.350 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733402285_collect-cpu-load.pm.log 00:07:06.350 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733402285_collect-vmstat.pm.log 00:07:06.350 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733402285_collect-cpu-temp.pm.log 00:07:06.350 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733402285_collect-bmc-pm.bmc.pm.log 00:07:07.307 13:38:06 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:07:07.307 13:38:06 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:07:07.307 13:38:06 -- spdk/autobuild.sh@12 -- $ umask 022 00:07:07.307 13:38:06 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:07:07.307 13:38:06 -- spdk/autobuild.sh@16 -- $ date -u 00:07:07.307 Thu Dec 5 12:38:06 PM UTC 2024 00:07:07.307 13:38:06 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:07:07.307 v25.01-pre-296-g8d3947977 00:07:07.307 13:38:06 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:07:07.307 13:38:06 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:07:07.307 13:38:06 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:07:07.307 13:38:06 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:07:07.307 13:38:06 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:07:07.307 13:38:06 -- 
common/autotest_common.sh@10 -- $ set +x 00:07:07.307 ************************************ 00:07:07.307 START TEST ubsan 00:07:07.307 ************************************ 00:07:07.307 13:38:06 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:07:07.307 using ubsan 00:07:07.307 00:07:07.307 real 0m0.000s 00:07:07.307 user 0m0.000s 00:07:07.307 sys 0m0.000s 00:07:07.307 13:38:06 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:07:07.307 13:38:06 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:07:07.307 ************************************ 00:07:07.307 END TEST ubsan 00:07:07.307 ************************************ 00:07:07.307 13:38:07 -- spdk/autobuild.sh@27 -- $ '[' -n v23.11 ']' 00:07:07.307 13:38:07 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:07:07.307 13:38:07 -- common/autobuild_common.sh@449 -- $ run_test build_native_dpdk _build_native_dpdk 00:07:07.307 13:38:07 -- common/autotest_common.sh@1105 -- $ '[' 2 -le 1 ']' 00:07:07.307 13:38:07 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:07:07.307 13:38:07 -- common/autotest_common.sh@10 -- $ set +x 00:07:07.307 ************************************ 00:07:07.307 START TEST build_native_dpdk 00:07:07.307 ************************************ 00:07:07.307 13:38:07 build_native_dpdk -- common/autotest_common.sh@1129 -- $ _build_native_dpdk 00:07:07.307 13:38:07 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:07:07.307 13:38:07 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:07:07.307 13:38:07 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:07:07.307 13:38:07 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:07:07.307 13:38:07 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:07:07.307 13:38:07 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:07:07.307 13:38:07 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:07:07.307 13:38:07 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:07:07.307 13:38:07 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:07:07.307 13:38:07 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:07:07.307 13:38:07 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:07:07.307 13:38:07 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:07:07.307 13:38:07 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:07:07.307 13:38:07 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:07:07.307 13:38:07 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:07:07.307 13:38:07 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:07:07.307 13:38:07 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/var/jenkins/workspace/nvmf-phy-autotest/dpdk 00:07:07.307 13:38:07 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! 
-d /var/jenkins/workspace/nvmf-phy-autotest/dpdk ]] 00:07:07.307 13:38:07 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:07:07.307 13:38:07 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /var/jenkins/workspace/nvmf-phy-autotest/dpdk log --oneline -n 5 00:07:07.307 eeb0605f11 version: 23.11.0 00:07:07.307 238778122a doc: update release notes for 23.11 00:07:07.307 46aa6b3cfc doc: fix description of RSS features 00:07:07.307 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:07:07.307 7e421ae345 devtools: support skipping forbid rule check 00:07:07.307 13:38:07 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:07:07.307 13:38:07 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:07:07.307 13:38:07 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=23.11.0 00:07:07.307 13:38:07 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:07:07.307 13:38:07 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:07:07.307 13:38:07 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:07:07.307 13:38:07 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:07:07.308 13:38:07 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:07:07.308 13:38:07 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:07:07.308 13:38:07 build_native_dpdk -- common/autobuild_common.sh@102 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base" "power/acpi" "power/amd_pstate" "power/cppc" "power/intel_pstate" "power/intel_uncore" "power/kvm_vm") 00:07:07.308 13:38:07 build_native_dpdk -- common/autobuild_common.sh@103 -- $ local mlx5_libs_added=n 00:07:07.308 13:38:07 build_native_dpdk -- common/autobuild_common.sh@104 -- $ [[ 0 -eq 1 ]] 00:07:07.308 13:38:07 build_native_dpdk -- common/autobuild_common.sh@104 -- $ [[ 0 -eq 1 ]] 00:07:07.308 13:38:07 build_native_dpdk -- common/autobuild_common.sh@146 -- $ [[ 0 -eq 1 ]] 00:07:07.308 13:38:07 build_native_dpdk -- common/autobuild_common.sh@174 -- $ cd /var/jenkins/workspace/nvmf-phy-autotest/dpdk 00:07:07.308 13:38:07 build_native_dpdk -- common/autobuild_common.sh@175 -- $ uname -s 00:07:07.308 13:38:07 build_native_dpdk -- common/autobuild_common.sh@175 -- $ '[' Linux = Linux ']' 00:07:07.308 13:38:07 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 23.11.0 21.11.0 00:07:07.308 13:38:07 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 23.11.0 '<' 21.11.0 00:07:07.308 13:38:07 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:07:07.308 13:38:07 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:07:07.308 13:38:07 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:07:07.308 13:38:07 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:07:07.308 13:38:07 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:07:07.308 13:38:07 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:07:07.308 13:38:07 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:07:07.308 13:38:07 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:07:07.308 13:38:07 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:07:07.308 13:38:07 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:07:07.308 13:38:07 
build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:07:07.308 13:38:07 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:07:07.308 13:38:07 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:07:07.308 13:38:07 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:07.308 13:38:07 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:07:07.308 13:38:07 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:07:07.308 13:38:07 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:07:07.308 13:38:07 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:07:07.308 13:38:07 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:07:07.308 13:38:07 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 21 00:07:07.308 13:38:07 build_native_dpdk -- scripts/common.sh@353 -- $ local d=21 00:07:07.308 13:38:07 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:07:07.308 13:38:07 build_native_dpdk -- scripts/common.sh@355 -- $ echo 21 00:07:07.308 13:38:07 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=21 00:07:07.308 13:38:07 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:07:07.308 13:38:07 build_native_dpdk -- scripts/common.sh@367 -- $ return 1 00:07:07.308 13:38:07 build_native_dpdk -- common/autobuild_common.sh@180 -- $ patch -p1 00:07:07.308 patching file config/rte_config.h 00:07:07.308 Hunk #1 succeeded at 60 (offset 1 line). 00:07:07.308 13:38:07 build_native_dpdk -- common/autobuild_common.sh@183 -- $ lt 23.11.0 24.07.0 00:07:07.308 13:38:07 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 23.11.0 '<' 24.07.0 00:07:07.308 13:38:07 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:07:07.308 13:38:07 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:07:07.308 13:38:07 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:07:07.308 13:38:07 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:07:07.308 13:38:07 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:07:07.308 13:38:07 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:07:07.308 13:38:07 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:07:07.308 13:38:07 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:07:07.308 13:38:07 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:07:07.308 13:38:07 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:07:07.308 13:38:07 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:07:07.308 13:38:07 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:07:07.308 13:38:07 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:07:07.308 13:38:07 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:07.308 13:38:07 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:07:07.308 13:38:07 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:07:07.308 13:38:07 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:07:07.308 13:38:07 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:07:07.308 13:38:07 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:07:07.308 13:38:07 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:07:07.308 13:38:07 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:07:07.308 13:38:07 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:07:07.308 13:38:07 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:07:07.308 13:38:07 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:07:07.308 13:38:07 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:07:07.308 13:38:07 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:07:07.308 13:38:07 build_native_dpdk -- scripts/common.sh@368 -- $ return 0 00:07:07.308 13:38:07 build_native_dpdk -- common/autobuild_common.sh@184 -- $ patch -p1 00:07:07.308 patching file lib/pcapng/rte_pcapng.c 00:07:07.308 13:38:07 build_native_dpdk -- common/autobuild_common.sh@186 -- $ ge 23.11.0 24.07.0 00:07:07.308 13:38:07 build_native_dpdk -- scripts/common.sh@376 -- $ cmp_versions 23.11.0 '>=' 24.07.0 00:07:07.308 13:38:07 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:07:07.308 13:38:07 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:07:07.308 13:38:07 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:07:07.308 13:38:07 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:07:07.308 13:38:07 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:07:07.308 13:38:07 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:07:07.308 13:38:07 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=>=' 00:07:07.308 13:38:07 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:07:07.308 13:38:07 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:07:07.308 13:38:07 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:07:07.308 13:38:07 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:07:07.308 13:38:07 build_native_dpdk -- scripts/common.sh@348 -- $ : 1 00:07:07.308 13:38:07 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:07:07.308 13:38:07 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:07.308 13:38:07 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:07:07.308 13:38:07 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:07:07.308 13:38:07 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:07:07.308 13:38:07 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:07:07.308 13:38:07 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:07:07.308 13:38:07 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:07:07.308 13:38:07 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:07:07.308 13:38:07 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:07:07.308 13:38:07 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:07:07.308 13:38:07 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:07:07.308 13:38:07 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:07:07.308 13:38:07 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:07:07.308 13:38:07 build_native_dpdk -- scripts/common.sh@368 -- $ return 1 00:07:07.308 13:38:07 build_native_dpdk -- common/autobuild_common.sh@190 -- $ dpdk_kmods=false 00:07:07.308 13:38:07 build_native_dpdk -- common/autobuild_common.sh@191 -- $ uname -s 00:07:07.308 13:38:07 build_native_dpdk -- common/autobuild_common.sh@191 -- $ '[' Linux = FreeBSD ']' 00:07:07.308 13:38:07 build_native_dpdk -- common/autobuild_common.sh@195 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base power/acpi power/amd_pstate power/cppc power/intel_pstate power/intel_uncore power/kvm_vm 00:07:07.308 13:38:07 build_native_dpdk -- common/autobuild_common.sh@195 -- $ meson build-tmp --prefix=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm, 00:07:12.579 The Meson build system 00:07:12.579 Version: 1.5.0 00:07:12.579 Source dir: /var/jenkins/workspace/nvmf-phy-autotest/dpdk 00:07:12.579 Build dir: /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build-tmp 00:07:12.579 Build type: native build 00:07:12.579 Program cat found: YES (/usr/bin/cat) 00:07:12.579 Project name: DPDK 00:07:12.579 Project version: 23.11.0 00:07:12.579 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:07:12.579 C linker for the host machine: gcc ld.bfd 2.40-14 00:07:12.579 Host machine cpu family: x86_64 00:07:12.579 Host machine cpu: x86_64 00:07:12.579 Message: ## Building in Developer Mode ## 00:07:12.579 Program pkg-config found: YES (/usr/bin/pkg-config) 00:07:12.579 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-phy-autotest/dpdk/buildtools/check-symbols.sh) 00:07:12.579 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-phy-autotest/dpdk/buildtools/options-ibverbs-static.sh) 00:07:12.579 Program python3 found: YES (/usr/bin/python3) 00:07:12.579 Program cat found: YES (/usr/bin/cat) 00:07:12.579 config/meson.build:113: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
00:07:12.580 Compiler for C supports arguments -march=native: YES 00:07:12.580 Checking for size of "void *" : 8 00:07:12.580 Checking for size of "void *" : 8 (cached) 00:07:12.580 Library m found: YES 00:07:12.580 Library numa found: YES 00:07:12.580 Has header "numaif.h" : YES 00:07:12.580 Library fdt found: NO 00:07:12.580 Library execinfo found: NO 00:07:12.580 Has header "execinfo.h" : YES 00:07:12.580 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:07:12.580 Run-time dependency libarchive found: NO (tried pkgconfig) 00:07:12.580 Run-time dependency libbsd found: NO (tried pkgconfig) 00:07:12.580 Run-time dependency jansson found: NO (tried pkgconfig) 00:07:12.580 Run-time dependency openssl found: YES 3.1.1 00:07:12.580 Run-time dependency libpcap found: YES 1.10.4 00:07:12.580 Has header "pcap.h" with dependency libpcap: YES 00:07:12.580 Compiler for C supports arguments -Wcast-qual: YES 00:07:12.580 Compiler for C supports arguments -Wdeprecated: YES 00:07:12.580 Compiler for C supports arguments -Wformat: YES 00:07:12.580 Compiler for C supports arguments -Wformat-nonliteral: NO 00:07:12.580 Compiler for C supports arguments -Wformat-security: NO 00:07:12.580 Compiler for C supports arguments -Wmissing-declarations: YES 00:07:12.580 Compiler for C supports arguments -Wmissing-prototypes: YES 00:07:12.580 Compiler for C supports arguments -Wnested-externs: YES 00:07:12.580 Compiler for C supports arguments -Wold-style-definition: YES 00:07:12.580 Compiler for C supports arguments -Wpointer-arith: YES 00:07:12.580 Compiler for C supports arguments -Wsign-compare: YES 00:07:12.580 Compiler for C supports arguments -Wstrict-prototypes: YES 00:07:12.580 Compiler for C supports arguments -Wundef: YES 00:07:12.580 Compiler for C supports arguments -Wwrite-strings: YES 00:07:12.580 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:07:12.580 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:07:12.580 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:07:12.580 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:07:12.580 Program objdump found: YES (/usr/bin/objdump) 00:07:12.580 Compiler for C supports arguments -mavx512f: YES 00:07:12.580 Checking if "AVX512 checking" compiles: YES 00:07:12.580 Fetching value of define "__SSE4_2__" : 1 00:07:12.580 Fetching value of define "__AES__" : 1 00:07:12.580 Fetching value of define "__AVX__" : 1 00:07:12.580 Fetching value of define "__AVX2__" : 1 00:07:12.580 Fetching value of define "__AVX512BW__" : 1 00:07:12.580 Fetching value of define "__AVX512CD__" : 1 00:07:12.580 Fetching value of define "__AVX512DQ__" : 1 00:07:12.580 Fetching value of define "__AVX512F__" : 1 00:07:12.580 Fetching value of define "__AVX512VL__" : 1 00:07:12.580 Fetching value of define "__PCLMUL__" : 1 00:07:12.580 Fetching value of define "__RDRND__" : 1 00:07:12.580 Fetching value of define "__RDSEED__" : 1 00:07:12.580 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:07:12.580 Fetching value of define "__znver1__" : (undefined) 00:07:12.580 Fetching value of define "__znver2__" : (undefined) 00:07:12.580 Fetching value of define "__znver3__" : (undefined) 00:07:12.580 Fetching value of define "__znver4__" : (undefined) 00:07:12.580 Compiler for C supports arguments -Wno-format-truncation: YES 00:07:12.580 Message: lib/log: Defining dependency "log" 00:07:12.580 Message: lib/kvargs: Defining dependency "kvargs" 00:07:12.580 Message: lib/telemetry: Defining dependency 
"telemetry" 00:07:12.580 Checking for function "getentropy" : NO 00:07:12.580 Message: lib/eal: Defining dependency "eal" 00:07:12.580 Message: lib/ring: Defining dependency "ring" 00:07:12.580 Message: lib/rcu: Defining dependency "rcu" 00:07:12.580 Message: lib/mempool: Defining dependency "mempool" 00:07:12.580 Message: lib/mbuf: Defining dependency "mbuf" 00:07:12.580 Fetching value of define "__PCLMUL__" : 1 (cached) 00:07:12.580 Fetching value of define "__AVX512F__" : 1 (cached) 00:07:12.580 Fetching value of define "__AVX512BW__" : 1 (cached) 00:07:12.580 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:07:12.580 Fetching value of define "__AVX512VL__" : 1 (cached) 00:07:12.580 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:07:12.580 Compiler for C supports arguments -mpclmul: YES 00:07:12.580 Compiler for C supports arguments -maes: YES 00:07:12.580 Compiler for C supports arguments -mavx512f: YES (cached) 00:07:12.580 Compiler for C supports arguments -mavx512bw: YES 00:07:12.580 Compiler for C supports arguments -mavx512dq: YES 00:07:12.580 Compiler for C supports arguments -mavx512vl: YES 00:07:12.580 Compiler for C supports arguments -mvpclmulqdq: YES 00:07:12.580 Compiler for C supports arguments -mavx2: YES 00:07:12.580 Compiler for C supports arguments -mavx: YES 00:07:12.580 Message: lib/net: Defining dependency "net" 00:07:12.580 Message: lib/meter: Defining dependency "meter" 00:07:12.580 Message: lib/ethdev: Defining dependency "ethdev" 00:07:12.580 Message: lib/pci: Defining dependency "pci" 00:07:12.580 Message: lib/cmdline: Defining dependency "cmdline" 00:07:12.580 Message: lib/metrics: Defining dependency "metrics" 00:07:12.580 Message: lib/hash: Defining dependency "hash" 00:07:12.580 Message: lib/timer: Defining dependency "timer" 00:07:12.580 Fetching value of define "__AVX512F__" : 1 (cached) 00:07:12.580 Fetching value of define "__AVX512VL__" : 1 (cached) 00:07:12.580 Fetching value of define "__AVX512CD__" : 1 (cached) 00:07:12.580 Fetching value of define "__AVX512BW__" : 1 (cached) 00:07:12.580 Message: lib/acl: Defining dependency "acl" 00:07:12.580 Message: lib/bbdev: Defining dependency "bbdev" 00:07:12.580 Message: lib/bitratestats: Defining dependency "bitratestats" 00:07:12.580 Run-time dependency libelf found: YES 0.191 00:07:12.580 Message: lib/bpf: Defining dependency "bpf" 00:07:12.580 Message: lib/cfgfile: Defining dependency "cfgfile" 00:07:12.580 Message: lib/compressdev: Defining dependency "compressdev" 00:07:12.580 Message: lib/cryptodev: Defining dependency "cryptodev" 00:07:12.580 Message: lib/distributor: Defining dependency "distributor" 00:07:12.580 Message: lib/dmadev: Defining dependency "dmadev" 00:07:12.580 Message: lib/efd: Defining dependency "efd" 00:07:12.580 Message: lib/eventdev: Defining dependency "eventdev" 00:07:12.580 Message: lib/dispatcher: Defining dependency "dispatcher" 00:07:12.580 Message: lib/gpudev: Defining dependency "gpudev" 00:07:12.580 Message: lib/gro: Defining dependency "gro" 00:07:12.580 Message: lib/gso: Defining dependency "gso" 00:07:12.580 Message: lib/ip_frag: Defining dependency "ip_frag" 00:07:12.580 Message: lib/jobstats: Defining dependency "jobstats" 00:07:12.580 Message: lib/latencystats: Defining dependency "latencystats" 00:07:12.580 Message: lib/lpm: Defining dependency "lpm" 00:07:12.580 Fetching value of define "__AVX512F__" : 1 (cached) 00:07:12.580 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:07:12.580 Fetching value of define "__AVX512IFMA__" : 
(undefined) 00:07:12.580 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:07:12.580 Message: lib/member: Defining dependency "member" 00:07:12.580 Message: lib/pcapng: Defining dependency "pcapng" 00:07:12.580 Compiler for C supports arguments -Wno-cast-qual: YES 00:07:12.580 Message: lib/power: Defining dependency "power" 00:07:12.580 Message: lib/rawdev: Defining dependency "rawdev" 00:07:12.580 Message: lib/regexdev: Defining dependency "regexdev" 00:07:12.580 Message: lib/mldev: Defining dependency "mldev" 00:07:12.580 Message: lib/rib: Defining dependency "rib" 00:07:12.580 Message: lib/reorder: Defining dependency "reorder" 00:07:12.580 Message: lib/sched: Defining dependency "sched" 00:07:12.580 Message: lib/security: Defining dependency "security" 00:07:12.580 Message: lib/stack: Defining dependency "stack" 00:07:12.580 Has header "linux/userfaultfd.h" : YES 00:07:12.580 Has header "linux/vduse.h" : YES 00:07:12.580 Message: lib/vhost: Defining dependency "vhost" 00:07:12.580 Message: lib/ipsec: Defining dependency "ipsec" 00:07:12.580 Message: lib/pdcp: Defining dependency "pdcp" 00:07:12.580 Fetching value of define "__AVX512F__" : 1 (cached) 00:07:12.580 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:07:12.580 Fetching value of define "__AVX512BW__" : 1 (cached) 00:07:12.580 Message: lib/fib: Defining dependency "fib" 00:07:12.580 Message: lib/port: Defining dependency "port" 00:07:12.580 Message: lib/pdump: Defining dependency "pdump" 00:07:12.580 Message: lib/table: Defining dependency "table" 00:07:12.580 Message: lib/pipeline: Defining dependency "pipeline" 00:07:12.580 Message: lib/graph: Defining dependency "graph" 00:07:12.580 Message: lib/node: Defining dependency "node" 00:07:12.580 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:07:13.148 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:07:13.148 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:07:13.148 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:07:13.148 Compiler for C supports arguments -Wno-sign-compare: YES 00:07:13.148 Compiler for C supports arguments -Wno-unused-value: YES 00:07:13.148 Compiler for C supports arguments -Wno-format: YES 00:07:13.148 Compiler for C supports arguments -Wno-format-security: YES 00:07:13.148 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:07:13.148 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:07:13.148 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:07:13.148 Compiler for C supports arguments -Wno-unused-parameter: YES 00:07:13.148 Fetching value of define "__AVX512F__" : 1 (cached) 00:07:13.148 Fetching value of define "__AVX512BW__" : 1 (cached) 00:07:13.148 Compiler for C supports arguments -mavx512f: YES (cached) 00:07:13.148 Compiler for C supports arguments -mavx512bw: YES (cached) 00:07:13.148 Compiler for C supports arguments -march=skylake-avx512: YES 00:07:13.148 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:07:13.148 Has header "sys/epoll.h" : YES 00:07:13.148 Program doxygen found: YES (/usr/local/bin/doxygen) 00:07:13.148 Configuring doxy-api-html.conf using configuration 00:07:13.148 Configuring doxy-api-man.conf using configuration 00:07:13.148 Program mandb found: YES (/usr/bin/mandb) 00:07:13.148 Program sphinx-build found: NO 00:07:13.148 Configuring rte_build_config.h using configuration 00:07:13.148 Message: 00:07:13.149 ================= 00:07:13.149 Applications Enabled 
00:07:13.149 =================
00:07:13.149 
00:07:13.149 apps:
00:07:13.149 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf,
00:07:13.149 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline,
00:07:13.149 test-pmd, test-regex, test-sad, test-security-perf,
00:07:13.149 
00:07:13.149 Message:
00:07:13.149 =================
00:07:13.149 Libraries Enabled
00:07:13.149 =================
00:07:13.149 
00:07:13.149 libs:
00:07:13.149 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:07:13.149 net, meter, ethdev, pci, cmdline, metrics, hash, timer,
00:07:13.149 acl, bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor,
00:07:13.149 dmadev, efd, eventdev, dispatcher, gpudev, gro, gso, ip_frag,
00:07:13.149 jobstats, latencystats, lpm, member, pcapng, power, rawdev, regexdev,
00:07:13.149 mldev, rib, reorder, sched, security, stack, vhost, ipsec,
00:07:13.149 pdcp, fib, port, pdump, table, pipeline, graph, node,
00:07:13.149 
00:07:13.149 
00:07:13.149 Message:
00:07:13.149 ===============
00:07:13.149 Drivers Enabled
00:07:13.149 ===============
00:07:13.149 
00:07:13.149 common:
00:07:13.149 
00:07:13.149 bus:
00:07:13.149 pci, vdev,
00:07:13.149 mempool:
00:07:13.149 ring,
00:07:13.149 dma:
00:07:13.149 
00:07:13.149 net:
00:07:13.149 i40e,
00:07:13.149 raw:
00:07:13.149 
00:07:13.149 crypto:
00:07:13.149 
00:07:13.149 compress:
00:07:13.149 
00:07:13.149 regex:
00:07:13.149 
00:07:13.149 ml:
00:07:13.149 
00:07:13.149 vdpa:
00:07:13.149 
00:07:13.149 event:
00:07:13.149 
00:07:13.149 baseband:
00:07:13.149 
00:07:13.149 gpu:
00:07:13.149 
00:07:13.149 
00:07:13.149 Message:
00:07:13.149 =================
00:07:13.149 Content Skipped
00:07:13.149 =================
00:07:13.149 
00:07:13.149 apps:
00:07:13.149 
00:07:13.149 libs:
00:07:13.149 
00:07:13.149 drivers:
00:07:13.149 common/cpt: not in enabled drivers build config
00:07:13.149 common/dpaax: not in enabled drivers build config
00:07:13.149 common/iavf: not in enabled drivers build config
00:07:13.149 common/idpf: not in enabled drivers build config
00:07:13.149 common/mvep: not in enabled drivers build config
00:07:13.149 common/octeontx: not in enabled drivers build config
00:07:13.149 bus/auxiliary: not in enabled drivers build config
00:07:13.149 bus/cdx: not in enabled drivers build config
00:07:13.149 bus/dpaa: not in enabled drivers build config
00:07:13.149 bus/fslmc: not in enabled drivers build config
00:07:13.149 bus/ifpga: not in enabled drivers build config
00:07:13.149 bus/platform: not in enabled drivers build config
00:07:13.149 bus/vmbus: not in enabled drivers build config
00:07:13.149 common/cnxk: not in enabled drivers build config
00:07:13.149 common/mlx5: not in enabled drivers build config
00:07:13.149 common/nfp: not in enabled drivers build config
00:07:13.149 common/qat: not in enabled drivers build config
00:07:13.149 common/sfc_efx: not in enabled drivers build config
00:07:13.149 mempool/bucket: not in enabled drivers build config
00:07:13.149 mempool/cnxk: not in enabled drivers build config
00:07:13.149 mempool/dpaa: not in enabled drivers build config
00:07:13.149 mempool/dpaa2: not in enabled drivers build config
00:07:13.149 mempool/octeontx: not in enabled drivers build config
00:07:13.149 mempool/stack: not in enabled drivers build config
00:07:13.149 dma/cnxk: not in enabled drivers build config
00:07:13.149 dma/dpaa: not in enabled drivers build config
00:07:13.149 dma/dpaa2: not in enabled drivers build config
00:07:13.149 dma/hisilicon: not in enabled drivers build config
00:07:13.149 dma/idxd: not in enabled drivers build config
00:07:13.149 dma/ioat: not in enabled drivers build config
00:07:13.149 dma/skeleton: not in enabled drivers build config
00:07:13.149 net/af_packet: not in enabled drivers build config
00:07:13.149 net/af_xdp: not in enabled drivers build config
00:07:13.149 net/ark: not in enabled drivers build config
00:07:13.149 net/atlantic: not in enabled drivers build config
00:07:13.149 net/avp: not in enabled drivers build config
00:07:13.149 net/axgbe: not in enabled drivers build config
00:07:13.149 net/bnx2x: not in enabled drivers build config
00:07:13.149 net/bnxt: not in enabled drivers build config
00:07:13.149 net/bonding: not in enabled drivers build config
00:07:13.149 net/cnxk: not in enabled drivers build config
00:07:13.149 net/cpfl: not in enabled drivers build config
00:07:13.149 net/cxgbe: not in enabled drivers build config
00:07:13.149 net/dpaa: not in enabled drivers build config
00:07:13.149 net/dpaa2: not in enabled drivers build config
00:07:13.149 net/e1000: not in enabled drivers build config
00:07:13.149 net/ena: not in enabled drivers build config
00:07:13.149 net/enetc: not in enabled drivers build config
00:07:13.149 net/enetfec: not in enabled drivers build config
00:07:13.149 net/enic: not in enabled drivers build config
00:07:13.149 net/failsafe: not in enabled drivers build config
00:07:13.149 net/fm10k: not in enabled drivers build config
00:07:13.149 net/gve: not in enabled drivers build config
00:07:13.149 net/hinic: not in enabled drivers build config
00:07:13.149 net/hns3: not in enabled drivers build config
00:07:13.149 net/iavf: not in enabled drivers build config
00:07:13.149 net/ice: not in enabled drivers build config
00:07:13.149 net/idpf: not in enabled drivers build config
00:07:13.149 net/igc: not in enabled drivers build config
00:07:13.149 net/ionic: not in enabled drivers build config
00:07:13.149 net/ipn3ke: not in enabled drivers build config
00:07:13.149 net/ixgbe: not in enabled drivers build config
00:07:13.149 net/mana: not in enabled drivers build config
00:07:13.149 net/memif: not in enabled drivers build config
00:07:13.149 net/mlx4: not in enabled drivers build config
00:07:13.149 net/mlx5: not in enabled drivers build config
00:07:13.149 net/mvneta: not in enabled drivers build config
00:07:13.149 net/mvpp2: not in enabled drivers build config
00:07:13.149 net/netvsc: not in enabled drivers build config
00:07:13.149 net/nfb: not in enabled drivers build config
00:07:13.149 net/nfp: not in enabled drivers build config
00:07:13.149 net/ngbe: not in enabled drivers build config
00:07:13.149 net/null: not in enabled drivers build config
00:07:13.149 net/octeontx: not in enabled drivers build config
00:07:13.149 net/octeon_ep: not in enabled drivers build config
00:07:13.149 net/pcap: not in enabled drivers build config
00:07:13.149 net/pfe: not in enabled drivers build config
00:07:13.149 net/qede: not in enabled drivers build config
00:07:13.149 net/ring: not in enabled drivers build config
00:07:13.149 net/sfc: not in enabled drivers build config
00:07:13.149 net/softnic: not in enabled drivers build config
00:07:13.149 net/tap: not in enabled drivers build config
00:07:13.149 net/thunderx: not in enabled drivers build config
00:07:13.149 net/txgbe: not in enabled drivers build config
00:07:13.149 net/vdev_netvsc: not in enabled drivers build config
00:07:13.149 net/vhost: not in enabled drivers build config
00:07:13.149 net/virtio: not in enabled drivers build config
00:07:13.149 net/vmxnet3: not in enabled drivers build config
00:07:13.149 raw/cnxk_bphy: not in enabled drivers build config
00:07:13.149 raw/cnxk_gpio: not in enabled drivers build config
00:07:13.149 raw/dpaa2_cmdif: not in enabled drivers build config
00:07:13.149 raw/ifpga: not in enabled drivers build config
00:07:13.149 raw/ntb: not in enabled drivers build config
00:07:13.149 raw/skeleton: not in enabled drivers build config
00:07:13.149 crypto/armv8: not in enabled drivers build config
00:07:13.149 crypto/bcmfs: not in enabled drivers build config
00:07:13.149 crypto/caam_jr: not in enabled drivers build config
00:07:13.149 crypto/ccp: not in enabled drivers build config
00:07:13.149 crypto/cnxk: not in enabled drivers build config
00:07:13.149 crypto/dpaa_sec: not in enabled drivers build config
00:07:13.149 crypto/dpaa2_sec: not in enabled drivers build config
00:07:13.149 crypto/ipsec_mb: not in enabled drivers build config
00:07:13.149 crypto/mlx5: not in enabled drivers build config
00:07:13.149 crypto/mvsam: not in enabled drivers build config
00:07:13.149 crypto/nitrox: not in enabled drivers build config
00:07:13.149 crypto/null: not in enabled drivers build config
00:07:13.149 crypto/octeontx: not in enabled drivers build config
00:07:13.149 crypto/openssl: not in enabled drivers build config
00:07:13.149 crypto/scheduler: not in enabled drivers build config
00:07:13.149 crypto/uadk: not in enabled drivers build config
00:07:13.149 crypto/virtio: not in enabled drivers build config
00:07:13.149 compress/isal: not in enabled drivers build config
00:07:13.149 compress/mlx5: not in enabled drivers build config
00:07:13.149 compress/octeontx: not in enabled drivers build config
00:07:13.149 compress/zlib: not in enabled drivers build config
00:07:13.149 regex/mlx5: not in enabled drivers build config
00:07:13.149 regex/cn9k: not in enabled drivers build config
00:07:13.149 ml/cnxk: not in enabled drivers build config
00:07:13.149 vdpa/ifc: not in enabled drivers build config
00:07:13.149 vdpa/mlx5: not in enabled drivers build config
00:07:13.149 vdpa/nfp: not in enabled drivers build config
00:07:13.149 vdpa/sfc: not in enabled drivers build config
00:07:13.149 event/cnxk: not in enabled drivers build config
00:07:13.150 event/dlb2: not in enabled drivers build config
00:07:13.150 event/dpaa: not in enabled drivers build config
00:07:13.150 event/dpaa2: not in enabled drivers build config
00:07:13.150 event/dsw: not in enabled drivers build config
00:07:13.150 event/opdl: not in enabled drivers build config
00:07:13.150 event/skeleton: not in enabled drivers build config
00:07:13.150 event/sw: not in enabled drivers build config
00:07:13.150 event/octeontx: not in enabled drivers build config
00:07:13.150 baseband/acc: not in enabled drivers build config
00:07:13.150 baseband/fpga_5gnr_fec: not in enabled drivers build config
00:07:13.150 baseband/fpga_lte_fec: not in enabled drivers build config
00:07:13.150 baseband/la12xx: not in enabled drivers build config
00:07:13.150 baseband/null: not in enabled drivers build config
00:07:13.150 baseband/turbo_sw: not in enabled drivers build config
00:07:13.150 gpu/cuda: not in enabled drivers build config
00:07:13.150 
00:07:13.150 
00:07:13.150 Build targets in project: 217
00:07:13.150 
00:07:13.150 DPDK 23.11.0
00:07:13.150 
00:07:13.150 User defined options
00:07:13.150 libdir : lib
00:07:13.150 prefix : /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build
00:07:13.150 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow
00:07:13.150 c_link_args :
00:07:13.150 enable_docs : false
00:07:13.150 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm,
00:07:13.150 enable_kmods : false
00:07:13.150 machine : native
00:07:13.150 tests : false
00:07:13.150 
00:07:13.150 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:07:13.150 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated.
00:07:13.150 13:38:12 build_native_dpdk -- common/autobuild_common.sh@199 -- $ ninja -C /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build-tmp -j112
00:07:13.417 ninja: Entering directory `/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build-tmp'
00:07:13.417 [1/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:07:13.417 [2/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:07:13.417 [3/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:07:13.417 [4/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:07:13.417 [5/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:07:13.417 [6/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:07:13.417 [7/707] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:07:13.679 [8/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:07:13.679 [9/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:07:13.679 [10/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:07:13.679 [11/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:07:13.679 [12/707] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:07:13.679 [13/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:07:13.679 [14/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:07:13.679 [15/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:07:13.679 [16/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:07:13.679 [17/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:07:13.679 [18/707] Linking static target lib/librte_kvargs.a
00:07:13.679 [19/707] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:07:13.679 [20/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:07:13.679 [21/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:07:13.679 [22/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:07:13.679 [23/707] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:07:13.679 [24/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:07:13.679 [25/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:07:13.679 [26/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:07:13.679 [27/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:07:13.679 [28/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:07:13.679 [29/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:07:13.679 [30/707] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:07:13.679 [31/707] Compiling C object
lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:07:13.679 [32/707] Linking static target lib/librte_pci.a 00:07:13.679 [33/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:07:13.679 [34/707] Compiling C object lib/librte_log.a.p/log_log.c.o 00:07:13.679 [35/707] Linking static target lib/librte_log.a 00:07:13.936 [36/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:07:13.936 [37/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:07:13.936 [38/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:07:13.936 [39/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:07:13.936 [40/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:07:13.936 [41/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:07:13.936 [42/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:07:13.936 [43/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:07:13.936 [44/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:07:13.936 [45/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:07:13.936 [46/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:07:13.936 [47/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:07:13.936 [48/707] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:07:14.200 [49/707] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:07:14.200 [50/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:07:14.200 [51/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:07:14.200 [52/707] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:07:14.200 [53/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:07:14.200 [54/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:07:14.200 [55/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:07:14.200 [56/707] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:07:14.200 [57/707] Linking static target lib/librte_meter.a 00:07:14.200 [58/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:07:14.200 [59/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:07:14.200 [60/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:07:14.200 [61/707] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:07:14.200 [62/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:07:14.200 [63/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:07:14.200 [64/707] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:07:14.200 [65/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:07:14.200 [66/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:07:14.200 [67/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:07:14.200 [68/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:07:14.200 [69/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:07:14.200 [70/707] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:07:14.200 [71/707] Compiling C object 
lib/librte_net.a.p/net_net_crc_sse.c.o 00:07:14.200 [72/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:07:14.200 [73/707] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:07:14.200 [74/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:07:14.200 [75/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:07:14.200 [76/707] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:07:14.200 [77/707] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:07:14.200 [78/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:07:14.200 [79/707] Linking static target lib/librte_cmdline.a 00:07:14.200 [80/707] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:07:14.200 [81/707] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:07:14.200 [82/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:07:14.200 [83/707] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:07:14.200 [84/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:07:14.200 [85/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:07:14.200 [86/707] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:07:14.200 [87/707] Linking static target lib/librte_ring.a 00:07:14.200 [88/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:07:14.200 [89/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:07:14.200 [90/707] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:07:14.200 [91/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:07:14.200 [92/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:07:14.200 [93/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:07:14.200 [94/707] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:07:14.200 [95/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:07:14.200 [96/707] Linking static target lib/net/libnet_crc_avx512_lib.a 00:07:14.200 [97/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:07:14.200 [98/707] Linking static target lib/librte_metrics.a 00:07:14.200 [99/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:07:14.200 [100/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:07:14.200 [101/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:07:14.200 [102/707] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:07:14.200 [103/707] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:07:14.200 [104/707] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:07:14.200 [105/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:07:14.200 [106/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:07:14.200 [107/707] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:07:14.200 [108/707] Linking static target lib/librte_bitratestats.a 00:07:14.200 [109/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:07:14.200 [110/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:07:14.458 [111/707] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:07:14.458 [112/707] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:07:14.458 [113/707] 
Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:07:14.458 [114/707] Linking static target lib/librte_net.a 00:07:14.458 [115/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:07:14.458 [116/707] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:07:14.458 [117/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:07:14.458 [118/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:07:14.458 [119/707] Linking static target lib/librte_cfgfile.a 00:07:14.458 [120/707] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:07:14.458 [121/707] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:07:14.458 [122/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:07:14.458 [123/707] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:07:14.458 [124/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:07:14.458 [125/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:07:14.458 [126/707] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:07:14.458 [127/707] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:07:14.458 [128/707] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:07:14.458 [129/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:07:14.458 [130/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:07:14.458 [131/707] Linking target lib/librte_log.so.24.0 00:07:14.458 [132/707] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:07:14.458 [133/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:07:14.458 [134/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:07:14.458 [135/707] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:07:14.458 [136/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:07:14.458 [137/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:07:14.721 [138/707] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:07:14.721 [139/707] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:07:14.721 [140/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:07:14.721 [141/707] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:07:14.721 [142/707] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:07:14.721 [143/707] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:07:14.721 [144/707] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:07:14.721 [145/707] Linking static target lib/librte_timer.a 00:07:14.721 [146/707] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:07:14.721 [147/707] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:07:14.721 [148/707] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:07:14.721 [149/707] Linking static target lib/librte_bbdev.a 00:07:14.721 [150/707] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:07:14.721 [151/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:07:14.721 [152/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:07:14.721 [153/707] Linking static target lib/librte_mempool.a 
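The WARNING logged just before the ninja invocation above shows the configure step was run with the legacy `meson [options]` spelling rather than `meson setup [options]`. The literal command line is not captured in this excerpt, but a non-deprecated invocation consistent with the "User defined options" summary can be sketched as follows; every option value below is taken from that summary, the empty c_link_args is omitted, and the trailing comma on enable_drivers is dropped:

    # Reconstructed configure step -- a sketch, not the literal command from this job.
    # Run from the DPDK source tree, /var/jenkins/workspace/nvmf-phy-autotest/dpdk:
    meson setup build-tmp \
        --prefix=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build \
        --libdir=lib \
        -Dc_args='-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
        -Denable_docs=false \
        -Denable_kmods=false \
        -Dtests=false \
        -Dmachine=native \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm
    # Build step, as actually logged above:
    ninja -C /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build-tmp -j112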
00:07:14.721 [154/707] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:07:14.721 [155/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:07:14.721 [156/707] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:07:14.721 [157/707] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:07:14.721 [158/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:07:14.721 [159/707] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:07:14.721 [160/707] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:07:14.721 [161/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:07:14.721 [162/707] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:07:14.721 [163/707] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:07:14.721 [164/707] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:07:14.721 [165/707] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:07:14.721 [166/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:07:14.721 [167/707] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:07:14.721 [168/707] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:07:14.721 [169/707] Linking static target lib/librte_jobstats.a 00:07:14.721 [170/707] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:07:14.721 [171/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:07:14.721 [172/707] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:07:14.721 [173/707] Linking target lib/librte_kvargs.so.24.0 00:07:14.721 [174/707] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:07:14.983 [175/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:07:14.983 [176/707] Linking static target lib/librte_compressdev.a 00:07:14.983 [177/707] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:07:14.983 [178/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:07:14.983 [179/707] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:07:14.983 [180/707] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:07:14.983 [181/707] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:07:14.983 [182/707] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:07:14.983 [183/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:07:14.983 [184/707] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:07:14.983 [185/707] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:07:14.983 [186/707] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:07:14.983 [187/707] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:07:14.983 [188/707] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:07:14.983 [189/707] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:07:14.983 [190/707] Linking static target lib/member/libsketch_avx512_tmp.a 00:07:14.983 [191/707] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:07:14.983 [192/707] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:07:14.983 [193/707] Compiling C object 
lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:07:14.983 [194/707] Linking static target lib/librte_latencystats.a 00:07:14.983 [195/707] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:07:14.983 [196/707] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:07:14.983 [197/707] Linking static target lib/librte_dispatcher.a 00:07:14.983 [198/707] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:07:14.983 [199/707] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:07:14.983 [200/707] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:07:14.983 [201/707] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:07:14.983 [202/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:07:14.983 [203/707] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:07:14.983 [204/707] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:07:14.983 [205/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:07:14.983 [206/707] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:07:14.983 [207/707] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:07:14.983 [208/707] Linking static target lib/librte_rcu.a 00:07:14.983 [209/707] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:07:14.983 [210/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:07:14.983 [211/707] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:07:14.983 [212/707] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:07:14.983 [213/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:07:14.983 [214/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:07:15.244 [215/707] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:07:15.244 [216/707] Linking static target lib/librte_gro.a 00:07:15.244 [217/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:07:15.244 [218/707] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:07:15.244 [219/707] Linking static target lib/librte_stack.a 00:07:15.244 [220/707] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:07:15.244 [221/707] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:07:15.244 [222/707] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:07:15.244 [223/707] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:07:15.244 [224/707] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:07:15.244 [225/707] Linking static target lib/librte_dmadev.a 00:07:15.244 [226/707] Linking static target lib/librte_regexdev.a 00:07:15.244 [227/707] Linking static target lib/librte_gpudev.a 00:07:15.244 [228/707] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:07:15.244 [229/707] Linking static target lib/librte_distributor.a 00:07:15.244 [230/707] Linking static target lib/librte_eal.a 00:07:15.244 [231/707] Linking static target lib/librte_telemetry.a 00:07:15.244 [232/707] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:07:15.244 [233/707] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:07:15.244 [234/707] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:07:15.244 [235/707] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 
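Should any of the values recorded under "User defined options" need to be inspected or changed later, meson can operate on the existing build directory without reconfiguring from scratch. The commands below are standard meson usage, not output from this job:

    # List every option value recorded for this build directory
    meson configure /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build-tmp
    # Change a single option in place; the next ninja run reconfigures and rebuilds as needed
    meson configure /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build-tmp -Denable_docs=true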
00:07:15.244 [236/707] Linking static target lib/librte_gso.a 00:07:15.244 [237/707] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:07:15.244 [238/707] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:07:15.244 [239/707] Linking static target lib/librte_mldev.a 00:07:15.244 [240/707] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:07:15.244 [241/707] Linking static target lib/librte_rawdev.a 00:07:15.244 [242/707] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:07:15.244 [243/707] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:07:15.244 [244/707] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:07:15.244 [245/707] Linking static target lib/librte_power.a 00:07:15.244 [246/707] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:07:15.244 [247/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:07:15.244 [248/707] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:07:15.244 [249/707] Linking static target lib/librte_ip_frag.a 00:07:15.244 [250/707] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:07:15.244 [251/707] Linking static target lib/librte_pcapng.a 00:07:15.244 [252/707] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:07:15.244 [253/707] Linking static target lib/librte_reorder.a 00:07:15.244 [254/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:07:15.244 [255/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:07:15.244 [256/707] Linking static target lib/librte_mbuf.a 00:07:15.244 [257/707] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:07:15.506 [258/707] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:07:15.506 [259/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:07:15.506 [260/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:07:15.506 [261/707] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:07:15.506 [262/707] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:07:15.506 [263/707] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:07:15.507 [264/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:07:15.507 [265/707] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:07:15.507 [266/707] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:07:15.507 [267/707] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:07:15.507 [268/707] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:07:15.507 [269/707] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:07:15.507 [270/707] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:07:15.507 [271/707] Linking static target lib/librte_security.a 00:07:15.507 [272/707] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:07:15.507 [273/707] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:07:15.507 [274/707] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:07:15.507 [275/707] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:07:15.507 [276/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:07:15.507 [277/707] Linking static target 
lib/librte_bpf.a 00:07:15.507 [278/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:07:15.507 [279/707] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o 00:07:15.507 [280/707] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:07:15.507 [281/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:07:15.507 [282/707] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o 00:07:15.507 [283/707] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:07:15.507 [284/707] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:07:15.507 [285/707] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:07:15.773 [286/707] Compiling C object lib/librte_node.a.p/node_null.c.o 00:07:15.773 [287/707] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:07:15.773 [288/707] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:07:15.773 [289/707] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:07:15.773 [290/707] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:07:15.773 [291/707] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:07:15.773 [292/707] Linking static target lib/librte_rib.a 00:07:15.773 [293/707] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:07:15.773 [294/707] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:07:15.773 [295/707] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:07:15.773 [296/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:07:15.773 [297/707] Linking static target lib/librte_lpm.a 00:07:15.773 [298/707] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:07:15.773 [299/707] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:07:15.773 [300/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:07:15.773 [301/707] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:07:15.773 [302/707] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:07:15.773 [303/707] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:07:15.773 [304/707] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:07:15.773 [305/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:07:15.773 [306/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:07:15.773 [307/707] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:07:15.773 [308/707] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:07:15.773 [309/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:07:15.773 [310/707] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:07:15.773 [311/707] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:07:15.773 [312/707] Linking target lib/librte_telemetry.so.24.0 00:07:15.773 [313/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:07:15.773 [314/707] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:07:16.034 [315/707] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:07:16.034 [316/707] Compiling 
C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:07:16.034 [317/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:07:16.034 [318/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:07:16.034 [319/707] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:07:16.034 [320/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:07:16.034 [321/707] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:07:16.034 [322/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:07:16.034 [323/707] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:07:16.034 [324/707] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:07:16.034 [325/707] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:07:16.034 [326/707] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:07:16.034 [327/707] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:07:16.034 [328/707] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:07:16.034 [329/707] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:07:16.034 [330/707] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:07:16.034 [331/707] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:07:16.034 [332/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:07:16.034 [333/707] Linking static target lib/librte_efd.a 00:07:16.034 [334/707] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:07:16.034 [335/707] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:07:16.034 [336/707] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:07:16.034 [337/707] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:07:16.034 [338/707] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:07:16.034 [339/707] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:07:16.034 [340/707] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:07:16.034 [341/707] Compiling C object lib/librte_node.a.p/node_log.c.o 00:07:16.034 [342/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:07:16.034 [343/707] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:07:16.298 [344/707] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:07:16.298 [345/707] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:07:16.298 [346/707] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:07:16.298 [347/707] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:07:16.298 [348/707] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:07:16.298 [349/707] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:07:16.298 [350/707] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:07:16.298 [351/707] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:07:16.298 [352/707] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:07:16.298 [353/707] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:07:16.298 [354/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:07:16.298 [355/707] Compiling C 
object lib/librte_node.a.p/node_kernel_tx.c.o 00:07:16.298 [356/707] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:07:16.298 [357/707] Linking static target lib/librte_fib.a 00:07:16.298 [358/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:07:16.298 [359/707] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:07:16.298 [360/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:07:16.298 [361/707] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:07:16.298 [362/707] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:07:16.298 [363/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:07:16.298 [364/707] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:07:16.298 [365/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:07:16.298 [366/707] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:07:16.298 [367/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:07:16.298 [368/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:07:16.298 [369/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:07:16.298 [370/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:07:16.298 [371/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:07:16.298 [372/707] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:07:16.561 [373/707] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:07:16.561 [374/707] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:07:16.561 [375/707] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:07:16.561 [376/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:07:16.561 [377/707] Linking static target drivers/libtmp_rte_bus_vdev.a 00:07:16.561 [378/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:07:16.561 [379/707] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:07:16.561 [380/707] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:07:16.561 [381/707] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:07:16.561 [382/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:07:16.561 [383/707] Linking static target lib/librte_graph.a 00:07:16.561 [384/707] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:07:16.561 [385/707] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:07:16.561 [386/707] Linking static target lib/librte_pdump.a 00:07:16.561 [387/707] Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:07:16.561 [388/707] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:07:16.561 [389/707] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:07:16.561 [390/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:07:16.561 [391/707] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:07:16.561 [392/707] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:07:16.561 [393/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:07:16.561 [394/707] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:07:16.561 [395/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:07:16.561 
[396/707] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:07:16.561 [397/707] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:07:16.823 [398/707] Linking static target drivers/libtmp_rte_bus_pci.a 00:07:16.823 [399/707] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:07:16.823 [400/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:07:16.823 [401/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:07:16.823 [402/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:07:16.823 [403/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:07:16.823 [404/707] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:07:16.823 [405/707] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:07:16.823 [406/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:07:16.823 [407/707] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:07:16.823 [408/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:07:16.823 [409/707] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:07:16.823 [410/707] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:07:16.823 [411/707] Linking static target drivers/librte_bus_vdev.a 00:07:16.823 [412/707] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:07:16.823 [413/707] Linking static target lib/librte_table.a 00:07:16.823 [414/707] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:07:16.823 [415/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:07:16.823 [416/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:07:16.823 [417/707] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:07:16.823 [418/707] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:07:16.823 [419/707] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:07:16.823 [420/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:07:16.823 [421/707] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:07:16.823 [422/707] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:07:16.823 [423/707] Linking static target lib/librte_sched.a 00:07:16.823 [424/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:07:16.823 [425/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:07:16.823 [426/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:07:16.823 [427/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:07:16.823 [428/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:07:17.082 [429/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:07:17.082 [430/707] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:07:17.082 [431/707] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:07:17.082 [432/707] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:07:17.082 [433/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:07:17.082 [434/707] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:07:17.082 [435/707] Compiling C object 
app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:07:17.082 [436/707] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:07:17.082 [437/707] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:07:17.082 [438/707] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:07:17.082 [439/707] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:07:17.082 [440/707] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:07:17.082 [441/707] Linking static target lib/librte_cryptodev.a 00:07:17.082 [442/707] Linking static target drivers/librte_bus_pci.a 00:07:17.082 [443/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:07:17.082 [444/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:07:17.082 [445/707] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:07:17.082 [446/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:07:17.082 [447/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:07:17.082 [448/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:07:17.082 [449/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:07:17.341 [450/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:07:17.341 [451/707] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:07:17.341 [452/707] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:07:17.341 [453/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:07:17.341 [454/707] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:07:17.341 [455/707] Linking static target lib/librte_ipsec.a 00:07:17.341 [456/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:07:17.341 [457/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:07:17.341 [458/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:07:17.341 [459/707] Linking static target drivers/libtmp_rte_mempool_ring.a 00:07:17.341 [460/707] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:07:17.341 [461/707] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:07:17.341 [462/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:07:17.341 [463/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:07:17.341 [464/707] Linking static target lib/librte_member.a 00:07:17.341 [465/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:07:17.341 [466/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:07:17.341 [467/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:07:17.341 [468/707] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:07:17.341 [469/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:07:17.341 [470/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:07:17.341 [471/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:07:17.341 [472/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:07:17.341 
[473/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:07:17.341 [474/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:07:17.341 [475/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:07:17.341 [476/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:07:17.341 [477/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:07:17.341 [478/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:07:17.341 [479/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:07:17.341 [480/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:07:17.341 [481/707] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:07:17.341 [482/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:07:17.341 [483/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:07:17.600 [484/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:07:17.600 [485/707] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:07:17.600 [486/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:07:17.600 [487/707] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:07:17.600 [488/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:07:17.600 [489/707] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:07:17.600 [490/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:07:17.600 [491/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:07:17.600 [492/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:07:17.600 [493/707] Linking static target lib/librte_node.a 00:07:17.600 [494/707] Linking static target lib/librte_pdcp.a 00:07:17.600 [495/707] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:07:17.600 [496/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:07:17.600 [497/707] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:07:17.600 [498/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:07:17.600 [499/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:07:17.600 [500/707] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:07:17.600 [501/707] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:07:17.600 [502/707] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:07:17.600 [503/707] Linking static target drivers/librte_mempool_ring.a 00:07:17.600 [504/707] Linking static target lib/librte_hash.a 00:07:17.600 [505/707] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:07:17.600 [506/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:07:17.600 [507/707] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:07:17.600 [508/707] Linking static target lib/librte_port.a 00:07:17.600 [509/707] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:07:17.600 [510/707] Compiling C object 
app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:07:17.600 [511/707] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:07:17.600 [512/707] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:07:17.600 [513/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:07:17.600 [514/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:07:17.600 [515/707] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:07:17.600 [516/707] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:07:17.600 [517/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:07:17.600 [518/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:07:17.600 [519/707] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:07:17.600 [520/707] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:07:17.600 [521/707] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:07:17.600 [522/707] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:07:17.600 [523/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:07:17.600 [524/707] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:07:17.859 [525/707] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:07:17.859 [526/707] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:07:17.859 [527/707] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:07:17.859 [528/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:07:17.859 [529/707] Linking static target lib/acl/libavx2_tmp.a 00:07:17.859 [530/707] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:07:17.859 [531/707] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:07:17.859 [532/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:07:17.859 [533/707] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:07:17.859 [534/707] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:07:17.859 [535/707] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:07:17.859 [536/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:07:17.859 [537/707] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:07:17.859 [538/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:07:17.859 [539/707] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:07:17.859 [540/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:07:17.859 [541/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:07:17.859 [542/707] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:07:17.859 [543/707] Linking static target lib/librte_eventdev.a 00:07:17.859 [544/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:07:17.859 [545/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:07:18.119 [546/707] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:07:18.119 [547/707] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:07:18.119 [548/707] Compiling C object 
drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o
00:07:18.119 [549/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o
00:07:18.119 [550/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o
00:07:18.119 [551/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o
00:07:18.119 [552/707] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o
00:07:18.119 [553/707] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o
00:07:18.119 [554/707] Linking static target drivers/net/i40e/libi40e_avx2_lib.a
00:07:18.119 [555/707] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o
00:07:18.119 [556/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o
00:07:18.119 [557/707] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o
00:07:18.119 [558/707] Linking static target lib/librte_acl.a
00:07:18.119 [559/707] Linking static target drivers/net/i40e/libi40e_avx512_lib.a
00:07:18.119 [560/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o
00:07:18.119 [561/707] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o
00:07:18.119 [562/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o
00:07:18.119 [563/707] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output)
00:07:18.378 [564/707] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output)
00:07:18.378 [565/707] Linking static target drivers/net/i40e/base/libi40e_base.a
00:07:18.378 [566/707] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o
00:07:18.378 [567/707] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o
00:07:18.378 [568/707] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o
00:07:18.637 [569/707] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o
00:07:18.637 [570/707] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output)
00:07:18.637 [571/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o
00:07:18.637 [572/707] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output)
00:07:18.896 [573/707] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o
00:07:18.896 [574/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o
00:07:18.896 [575/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o
00:07:18.896 [576/707] Linking static target lib/librte_ethdev.a
00:07:19.155 [577/707] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o
00:07:19.414 [578/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o
00:07:19.414 [579/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o
00:07:19.672 [580/707] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o
00:07:19.930 [581/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o
00:07:19.930 [582/707] Linking static target drivers/libtmp_rte_net_i40e.a
00:07:19.930 [583/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o
00:07:20.188 [584/707] Generating drivers/rte_net_i40e.pmd.c with a custom command
00:07:20.188 [585/707] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o
00:07:20.188 [586/707] Compiling C object drivers/librte_net_i40e.so.24.0.p/meson-generated_.._rte_net_i40e.pmd.c.o
00:07:20.445 [587/707] Linking static target drivers/librte_net_i40e.a
00:07:20.702 [588/707] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o
00:07:21.267 [589/707] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output)
00:07:21.267 [590/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o
00:07:21.268 [591/707] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output)
00:07:22.201 [592/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o
00:07:25.486 [593/707] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output)
00:07:25.486 [594/707] Linking target lib/librte_eal.so.24.0
00:07:25.486 [595/707] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols
00:07:25.486 [596/707] Linking target lib/librte_meter.so.24.0
00:07:25.486 [597/707] Linking target drivers/librte_bus_vdev.so.24.0
00:07:25.486 [598/707] Linking target lib/librte_cfgfile.so.24.0
00:07:25.486 [599/707] Linking target lib/librte_rawdev.so.24.0
00:07:25.486 [600/707] Linking target lib/librte_pci.so.24.0
00:07:25.486 [601/707] Linking target lib/librte_ring.so.24.0
00:07:25.486 [602/707] Linking target lib/librte_timer.so.24.0
00:07:25.486 [603/707] Linking target lib/librte_acl.so.24.0
00:07:25.486 [604/707] Linking target lib/librte_stack.so.24.0
00:07:25.486 [605/707] Linking target lib/librte_dmadev.so.24.0
00:07:25.486 [606/707] Linking target lib/librte_jobstats.so.24.0
00:07:25.744 [607/707] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols
00:07:25.744 [608/707] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols
00:07:25.744 [609/707] Generating symbol file lib/librte_acl.so.24.0.p/librte_acl.so.24.0.symbols
00:07:25.744 [610/707] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols
00:07:25.744 [611/707] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols
00:07:25.744 [612/707] Generating symbol file drivers/librte_bus_vdev.so.24.0.p/librte_bus_vdev.so.24.0.symbols
00:07:25.744 [613/707] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols
00:07:25.744 [614/707] Linking target lib/librte_mempool.so.24.0
00:07:25.744 [615/707] Linking target lib/librte_rcu.so.24.0
00:07:25.744 [616/707] Linking target drivers/librte_bus_pci.so.24.0
00:07:25.744 [617/707] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols
00:07:25.744 [618/707] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols
00:07:25.744 [619/707] Generating symbol file drivers/librte_bus_pci.so.24.0.p/librte_bus_pci.so.24.0.symbols
00:07:25.744 [620/707] Linking target drivers/librte_mempool_ring.so.24.0
00:07:25.744 [621/707] Linking target lib/librte_rib.so.24.0
00:07:25.744 [622/707] Linking target lib/librte_mbuf.so.24.0
00:07:26.001 [623/707] Generating symbol file lib/librte_rib.so.24.0.p/librte_rib.so.24.0.symbols
00:07:26.001 [624/707] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols
00:07:26.001 [625/707] Linking target lib/librte_fib.so.24.0
00:07:26.001 [626/707] Linking target lib/librte_gpudev.so.24.0
00:07:26.001 [627/707] Linking target lib/librte_bbdev.so.24.0
00:07:26.001 [628/707] Linking target lib/librte_net.so.24.0
00:07:26.001 [629/707] Linking target lib/librte_sched.so.24.0
00:07:26.001 [630/707] Linking target lib/librte_compressdev.so.24.0
00:07:26.001 [631/707] Linking target lib/librte_regexdev.so.24.0
00:07:26.001 [632/707] Linking target lib/librte_reorder.so.24.0
00:07:26.001 [633/707] Linking target lib/librte_distributor.so.24.0
00:07:26.001 [634/707] Linking target lib/librte_mldev.so.24.0
00:07:26.001 [635/707] Linking target lib/librte_cryptodev.so.24.0
00:07:26.259 [636/707] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols
00:07:26.259 [637/707] Generating symbol file lib/librte_reorder.so.24.0.p/librte_reorder.so.24.0.symbols
00:07:26.259 [638/707] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols
00:07:26.259 [639/707] Generating symbol file lib/librte_sched.so.24.0.p/librte_sched.so.24.0.symbols
00:07:26.259 [640/707] Linking target lib/librte_hash.so.24.0
00:07:26.259 [641/707] Linking target lib/librte_cmdline.so.24.0
00:07:26.259 [642/707] Linking target lib/librte_security.so.24.0
00:07:26.259 [643/707] Generating symbol file lib/librte_security.so.24.0.p/librte_security.so.24.0.symbols
00:07:26.259 [644/707] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols
00:07:26.259 [645/707] Linking target lib/librte_pdcp.so.24.0
00:07:26.259 [646/707] Linking target lib/librte_efd.so.24.0
00:07:26.259 [647/707] Linking target lib/librte_lpm.so.24.0
00:07:26.259 [648/707] Linking target lib/librte_ipsec.so.24.0
00:07:26.259 [649/707] Linking target lib/librte_member.so.24.0
00:07:26.517 [650/707] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output)
00:07:26.517 [651/707] Linking target lib/librte_ethdev.so.24.0
00:07:26.517 [652/707] Generating symbol file lib/librte_lpm.so.24.0.p/librte_lpm.so.24.0.symbols
00:07:26.517 [653/707] Generating symbol file lib/librte_ipsec.so.24.0.p/librte_ipsec.so.24.0.symbols
00:07:26.517 [654/707] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols
00:07:26.517 [655/707] Linking target lib/librte_gro.so.24.0
00:07:26.517 [656/707] Linking target lib/librte_metrics.so.24.0
00:07:26.517 [657/707] Linking target lib/librte_bpf.so.24.0
00:07:26.517 [658/707] Linking target lib/librte_gso.so.24.0
00:07:26.517 [659/707] Linking target lib/librte_pcapng.so.24.0
00:07:26.517 [660/707] Linking target lib/librte_ip_frag.so.24.0
00:07:26.517 [661/707] Linking target lib/librte_power.so.24.0
00:07:26.776 [662/707] Linking target lib/librte_eventdev.so.24.0
00:07:26.776 [663/707] Linking target drivers/librte_net_i40e.so.24.0
00:07:26.776 [664/707] Generating symbol file lib/librte_metrics.so.24.0.p/librte_metrics.so.24.0.symbols
00:07:26.776 [665/707] Generating symbol file lib/librte_ip_frag.so.24.0.p/librte_ip_frag.so.24.0.symbols
00:07:26.776 [666/707] Generating symbol file lib/librte_eventdev.so.24.0.p/librte_eventdev.so.24.0.symbols
00:07:26.776 [667/707] Generating symbol file lib/librte_bpf.so.24.0.p/librte_bpf.so.24.0.symbols
00:07:26.776 [668/707] Generating symbol file lib/librte_pcapng.so.24.0.p/librte_pcapng.so.24.0.symbols
00:07:26.776 [669/707] Linking target lib/librte_latencystats.so.24.0
00:07:26.776 [670/707] Linking target lib/librte_dispatcher.so.24.0
00:07:26.776 [671/707] Linking target lib/librte_bitratestats.so.24.0
00:07:26.776 [672/707] Linking target lib/librte_port.so.24.0
00:07:26.776 [673/707] Linking target lib/librte_graph.so.24.0
00:07:26.776 [674/707] Linking target lib/librte_pdump.so.24.0
00:07:27.034 [675/707] Generating symbol file lib/librte_port.so.24.0.p/librte_port.so.24.0.symbols
00:07:27.034 [676/707] Generating symbol file lib/librte_graph.so.24.0.p/librte_graph.so.24.0.symbols
00:07:27.034 [677/707] Linking target lib/librte_table.so.24.0
00:07:27.034 [678/707] Linking target lib/librte_node.so.24.0
00:07:27.034 [679/707] Generating symbol file lib/librte_table.so.24.0.p/librte_table.so.24.0.symbols
00:07:27.602 [680/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o
00:07:27.602 [681/707] Linking static target lib/librte_pipeline.a
00:07:27.860 [682/707] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o
00:07:27.860 [683/707] Linking static target lib/librte_vhost.a
00:07:28.118 [684/707] Linking target app/dpdk-proc-info
00:07:28.118 [685/707] Linking target app/dpdk-graph
00:07:28.118 [686/707] Linking target app/dpdk-dumpcap
00:07:28.118 [687/707] Linking target app/dpdk-test-sad
00:07:28.118 [688/707] Linking target app/dpdk-test-mldev
00:07:28.118 [689/707] Linking target app/dpdk-test-security-perf
00:07:28.118 [690/707] Linking target app/dpdk-test-regex
00:07:28.118 [691/707] Linking target app/dpdk-test-dma-perf
00:07:28.118 [692/707] Linking target app/dpdk-test-crypto-perf
00:07:28.118 [693/707] Linking target app/dpdk-test-cmdline
00:07:28.118 [694/707] Linking target app/dpdk-pdump
00:07:28.118 [695/707] Linking target app/dpdk-test-bbdev
00:07:28.118 [696/707] Linking target app/dpdk-test-gpudev
00:07:28.118 [697/707] Linking target app/dpdk-test-compress-perf
00:07:28.118 [698/707] Linking target app/dpdk-test-eventdev
00:07:28.118 [699/707] Linking target app/dpdk-test-pipeline
00:07:28.118 [700/707] Linking target app/dpdk-test-fib
00:07:28.118 [701/707] Linking target app/dpdk-test-acl
00:07:28.118 [702/707] Linking target app/dpdk-test-flow-perf
00:07:28.377 [703/707] Linking target app/dpdk-testpmd
00:07:29.756 [704/707] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output)
00:07:29.756 [705/707] Linking target lib/librte_vhost.so.24.0
00:07:33.184 [706/707] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output)
00:07:33.184 [707/707] Linking target lib/librte_pipeline.so.24.0
00:07:33.184 13:38:32 build_native_dpdk -- common/autobuild_common.sh@201 -- $ uname -s
00:07:33.184 13:38:32 build_native_dpdk -- common/autobuild_common.sh@201 -- $ [[ Linux == \F\r\e\e\B\S\D ]]
00:07:33.184 13:38:32 build_native_dpdk -- common/autobuild_common.sh@214 -- $ ninja -C /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build-tmp -j112 install
00:07:33.184 ninja: Entering directory `/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build-tmp'
00:07:33.184 [0/1] Installing files.
00:07:33.184 Installing subdir /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples
00:07:33.184 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/timer/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/timer
00:07:33.184 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/timer/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/timer
00:07:33.184 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ethtool/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ethtool
00:07:33.184 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib
00:07:33.184 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ethtool/lib/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib
00:07:33.184 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib
00:07:33.184 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:07:33.184 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ethtool/ethtool-app/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:07:33.185 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:07:33.185 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ethtool/ethtool-app/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:07:33.185 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/sa.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:07:33.185 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:07:33.185 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:07:33.185 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/parser.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:07:33.185 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/rt.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:07:33.185 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/sad.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:07:33.185 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipip.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:07:33.185 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:07:33.185 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ep0.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:07:33.185 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:07:33.185 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/flow.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:07:33.185 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:07:33.185 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/sad.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:07:33.185 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/parser.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:07:33.185 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:07:33.185 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ep1.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:07:33.185 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:07:33.185 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/esp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:07:33.185 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/esp.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:07:33.185 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:07:33.185 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:07:33.185 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/flow.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:07:33.185 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/sp6.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:07:33.185 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_process.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:07:33.185 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_neon.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:07:33.185 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:07:33.185 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/sp4.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:07:33.185 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:07:33.185 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:07:33.185 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:07:33.185 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:07:33.185 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:07:33.185 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:07:33.185 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/run_test.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:07:33.185 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:07:33.185 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:07:33.185 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/linux_test.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:07:33.185 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:07:33.185 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/load_env.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:07:33.185 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:07:33.185 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:07:33.185 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:07:33.185 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.py to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:07:33.185 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:07:33.185 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:07:33.185 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:07:33.185 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:07:33.185 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:07:33.185 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:07:33.185 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:07:33.185 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:07:33.185 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:07:33.185 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:07:33.185 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:07:33.185 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost_crypto/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto
00:07:33.185 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost_crypto/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto
00:07:33.185 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ptpclient/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient
00:07:33.185 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ptpclient/ptpclient.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient
00:07:33.185 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd-power/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power
00:07:33.185 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd-power/main.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power
00:07:33.185 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd-power/perf_core.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power
00:07:33.185 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd-power/perf_core.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power
00:07:33.185 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd-power/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power
00:07:33.185 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:07:33.185 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/em_default_v6.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:07:33.185 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_neon.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:07:33.185 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:07:33.185 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:07:33.185 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:07:33.185 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/em_default_v4.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:07:33.186 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:07:33.186 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_common.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:07:33.186 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:07:33.186 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:07:33.186 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/em_route_parse.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:07:33.186 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:07:33.186 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_fib.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:07:33.186 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/lpm_default_v4.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:07:33.186 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:07:33.186 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_sse.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:07:33.186 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:07:33.186 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_route.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:07:33.186 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:07:33.186 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/lpm_default_v6.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:07:33.186 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:07:33.186 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_generic.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:07:33.186 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:07:33.186 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:07:33.186 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:07:33.186 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:07:33.186 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:07:33.186 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:07:33.186 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_altivec.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:07:33.186 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:07:33.186 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:07:33.186 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/lpm_route_parse.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:07:33.186 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-cat/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat
00:07:33.186 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat
00:07:33.186 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-cat/cat.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat
00:07:33.186 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-cat/cat.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat
00:07:33.186 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bbdev_app/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app
00:07:33.186 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bbdev_app/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app
00:07:33.186 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/rxtx_callbacks/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks
00:07:33.186 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/rxtx_callbacks/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks
00:07:33.186 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd
00:07:33.186 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd
00:07:33.186 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:07:33.186 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:07:33.186 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_aes.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:07:33.186 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:07:33.186 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:07:33.186 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_ccm.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:07:33.186 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_xts.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:07:33.186 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_cmac.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:07:33.186 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:07:33.186 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_tdes.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:07:33.186 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_rsa.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:07:33.186 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_sha.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:07:33.186 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:07:33.186 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_gcm.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:07:33.186 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:07:33.186 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_hmac.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:07:33.186 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/service_cores/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/service_cores
00:07:33.186 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/service_cores/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/service_cores
00:07:33.186 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/dma/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/dma
00:07:33.186 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/dma/dmafwd.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/dma
00:07:33.186 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd-graph/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph
00:07:33.186 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd-graph/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph
00:07:33.186 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipv4_multicast/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast
00:07:33.186 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipv4_multicast/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast
00:07:33.186 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-keepalive/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive
00:07:33.186 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-keepalive/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive
00:07:33.186 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive
00:07:33.186 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive
00:07:33.186 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent
00:07:33.186 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent
00:07:33.186 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_meter/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter
00:07:33.186 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_meter/main.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter
00:07:33.186 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_meter/rte_policer.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter
00:07:33.186 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_meter/rte_policer.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter
00:07:33.186 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_meter/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter
00:07:33.186 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vmdq/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vmdq
00:07:33.186 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vmdq/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vmdq
00:07:33.186 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-crypto/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto
00:07:33.187 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-crypto/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto
00:07:33.187 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bond/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bond
00:07:33.187 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bond/commands.list to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bond
00:07:33.187 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bond/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bond
00:07:33.187 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/common/pkt_group.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/common
00:07:33.187 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/common/altivec/port_group.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/common/altivec
00:07:33.187 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/common/sse/port_group.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/common/sse
00:07:33.187 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/common/neon/port_group.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/common/neon
00:07:33.187 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/link_status_interrupt/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt
00:07:33.187 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/link_status_interrupt/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt
00:07:33.187 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/skeleton/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/skeleton
00:07:33.187 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/skeleton/basicfwd.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/skeleton
00:07:33.187 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/eventdev_pipeline/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:07:33.187 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_common.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:07:33.187 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:07:33.187 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:07:33.187 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/eventdev_pipeline/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:07:33.187 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/obj.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:07:33.187 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/cli.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:07:33.187 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/conn.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:07:33.187 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/obj.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:07:33.187 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:07:33.187 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/cli.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:07:33.187 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/thread.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:07:33.187 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:07:33.187 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/thread.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:07:33.187 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/conn.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:07:33.187 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:07:33.187 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/varbit.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:07:33.187 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:07:33.187 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/recirculation.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:07:33.187 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:07:33.187 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:07:33.187 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/learner.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:07:33.187 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:07:33.187 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:07:33.187 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/meter.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:07:33.187 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:07:33.187 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/ipsec.io to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:07:33.187 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/selector.txt to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:07:33.187 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/fib.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:07:33.187 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/ipsec.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:07:33.187 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/ipsec_sa.txt to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:07:33.187 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/varbit.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:07:33.187 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/vxlan.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:07:33.187 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:07:33.187 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/selector.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:07:33.187 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.py to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:07:33.187 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/hash_func.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:07:33.187 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/learner.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:07:33.187 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/fib.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:07:33.187 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/mirroring.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:07:33.187 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/registers.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:07:33.187 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/ipsec.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:07:33.187 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/vxlan.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:07:33.187 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/pcap.io to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:07:33.187 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/selector.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:07:33.187 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:07:33.187 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/packet.txt to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:07:33.187 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/meter.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:07:33.187 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/fib_routing_table.txt to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:07:33.187 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/hash_func.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:07:33.187 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/mirroring.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:07:33.187 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.txt to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:07:33.187 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/ethdev.io to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:07:33.187 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/recirculation.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:07:33.187 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:07:33.187 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/registers.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:07:33.187 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/rss.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:07:33.187 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd
00:07:33.187 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/efd_node/node.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node
00:07:33.188 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/efd_node/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node
00:07:33.188 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/shared/common.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/shared
00:07:33.188 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/efd_server/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server
00:07:33.188 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server
00:07:33.188 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server
00:07:33.188 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/efd_server/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server
00:07:33.188 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server
00:07:33.188 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server
00:07:33.188 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_reassembly/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly
00:07:33.188 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_reassembly/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly
00:07:33.188 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-macsec/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec
00:07:33.188 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-macsec/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec
00:07:33.188 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/cmdline/commands.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/cmdline
00:07:33.188 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/cmdline/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/cmdline
00:07:33.188 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/cmdline/parse_obj_list.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/cmdline
00:07:33.188 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/cmdline/parse_obj_list.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/cmdline
00:07:33.188 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/cmdline/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/cmdline
00:07:33.188 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/cmdline/commands.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/cmdline
00:07:33.188 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/packet_ordering/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering
00:07:33.188 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/packet_ordering/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering
00:07:33.188 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-jobstats/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats
00:07:33.188 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-jobstats/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats
00:07:33.188 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vdpa/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vdpa
00:07:33.188 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vdpa/commands.list to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vdpa
00:07:33.188 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vdpa/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vdpa
00:07:33.188 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vdpa/vdpa_blk_compact.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vdpa
00:07:33.188 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process
00:07:33.188 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:07:33.188 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/hotplug_mp/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:07:33.188 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.list to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:07:33.188 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/hotplug_mp/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:07:33.188 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/simple_mp/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:07:33.188 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/simple_mp/commands.list to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:07:33.188 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:07:33.188 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/simple_mp/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:07:33.188 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:07:33.188 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/symmetric_mp/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp
00:07:33.188 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/symmetric_mp/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp
00:07:33.188 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp
00:07:33.188 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/shared/common.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared
00:07:33.188 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client
00:07:33.188 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client
00:07:33.188 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:07:33.188 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:07:33.188 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:07:33.188 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:07:33.188 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:07:33.188 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:07:33.188 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost_blk/blk_spec.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk
00:07:33.188 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost_blk/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk
00:07:33.188 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk
00:07:33.188 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost_blk/blk.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk
00:07:33.188 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk
00:07:33.188 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost_blk/vhost_blk_compat.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk
00:07:33.188 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_fragmentation/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation
00:07:33.188 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_fragmentation/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation
00:07:33.188 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/flow_filtering/flow_blocks.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering
00:07:33.188 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/flow_filtering/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering
00:07:33.188 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/flow_filtering/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering
00:07:33.188 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost
00:07:33.188 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost/main.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost
00:07:33.188 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost/virtio_net.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost
00:07:33.188 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost
00:07:33.188 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/helloworld/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/helloworld
00:07:33.188 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/helloworld/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/helloworld
00:07:33.188 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/app_thread.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:07:33.188 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:07:33.188 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/main.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:07:33.188 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/cfg_file.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:07:33.188 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/args.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:07:33.188 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/profile_pie.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:07:33.189 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/cmdline.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:07:33.189 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/profile.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:07:33.189 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/cfg_file.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:07:33.189 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:07:33.189 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/profile_red.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:07:33.189 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/profile_ov.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:07:33.189 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/stats.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:07:33.189 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/init.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:07:33.189 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/parser.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:07:33.189 Installing
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/swq.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:07:33.189 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/mempool.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:07:33.189 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/cli.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:07:33.189 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/conn.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:07:33.189 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:07:33.189 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/cli.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:07:33.189 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/tmgr.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:07:33.189 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/pipeline.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:07:33.189 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/pipeline.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:07:33.189 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/parser.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:07:33.189 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/link.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:07:33.189 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/tap.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:07:33.189 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/common.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:07:33.189 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:07:33.189 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/tap.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:07:33.189 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/swq.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:07:33.189 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/thread.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:07:33.189 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:07:33.189 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/action.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:07:33.189 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/mempool.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:07:33.189 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/thread.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:07:33.189 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/action.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:07:33.189 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/tmgr.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:07:33.189 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/conn.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:07:33.189 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/link.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:07:33.189 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:07:33.189 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:07:33.189 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/examples/firewall.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:07:33.189 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/examples/flow.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:07:33.189 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:07:33.189 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/examples/route.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:07:33.189 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:07:33.189 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/examples/tap.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:07:33.189 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:07:33.189 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/distributor/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:07:33.189 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/distributor/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:07:33.189 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/power_manager.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:07:33.189 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:07:33.189 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:07:33.189 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:07:33.189 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/parse.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:07:33.189 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:07:33.189 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/power_manager.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:07:33.189 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:07:33.189 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:07:33.189 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/parse.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:07:33.189 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:07:33.189 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:07:33.189 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:07:33.189 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:07:33.189 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:07:33.189 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:07:33.189 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:07:33.189 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:07:33.190 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:07:33.190 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:07:33.190 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:07:33.190 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ntb/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:07:33.190 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ntb/ntb_fwd.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:07:33.190 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ntb/commands.list to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:07:33.190 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bpf/t2.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:07:33.190 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bpf/README to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:07:33.190 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bpf/dummy.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:07:33.190 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bpf/t3.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:07:33.190 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bpf/t1.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:07:33.190 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:07:33.190 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:07:33.190 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:07:33.190 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:07:33.190 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:07:33.190 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:07:33.190 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:07:33.190 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 
00:07:33.190 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:07:33.190 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:07:33.190 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vmdq_dcb/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:07:33.190 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vmdq_dcb/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:07:33.190 Installing lib/librte_log.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:07:33.190 Installing lib/librte_log.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:07:33.190 Installing lib/librte_kvargs.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:07:33.190 Installing lib/librte_kvargs.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:07:33.190 Installing lib/librte_telemetry.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:07:33.190 Installing lib/librte_telemetry.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:07:33.190 Installing lib/librte_eal.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:07:33.190 Installing lib/librte_eal.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:07:33.190 Installing lib/librte_ring.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:07:33.190 Installing lib/librte_ring.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:07:33.190 Installing lib/librte_rcu.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:07:33.190 Installing lib/librte_rcu.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:07:33.190 Installing lib/librte_mempool.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:07:33.190 Installing lib/librte_mempool.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:07:33.190 Installing lib/librte_mbuf.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:07:33.190 Installing lib/librte_mbuf.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:07:33.190 Installing lib/librte_net.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:07:33.190 Installing lib/librte_net.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:07:33.190 Installing lib/librte_meter.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:07:33.190 Installing lib/librte_meter.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:07:33.190 Installing lib/librte_ethdev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:07:33.190 Installing lib/librte_ethdev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:07:33.190 Installing lib/librte_pci.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:07:33.190 Installing lib/librte_pci.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:07:33.190 Installing lib/librte_cmdline.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:07:33.190 Installing lib/librte_cmdline.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:07:33.190 Installing lib/librte_metrics.a to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:07:33.190 Installing lib/librte_metrics.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:07:33.190 Installing lib/librte_hash.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:07:33.190 Installing lib/librte_hash.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:07:33.190 Installing lib/librte_timer.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:07:33.190 Installing lib/librte_timer.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:07:33.190 Installing lib/librte_acl.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:07:33.190 Installing lib/librte_acl.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:07:33.190 Installing lib/librte_bbdev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:07:33.190 Installing lib/librte_bbdev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:07:33.190 Installing lib/librte_bitratestats.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:07:33.190 Installing lib/librte_bitratestats.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:07:33.190 Installing lib/librte_bpf.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:07:33.190 Installing lib/librte_bpf.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:07:33.190 Installing lib/librte_cfgfile.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:07:33.190 Installing lib/librte_cfgfile.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:07:33.190 Installing lib/librte_compressdev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:07:33.190 Installing lib/librte_compressdev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:07:33.190 Installing lib/librte_cryptodev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:07:33.190 Installing lib/librte_cryptodev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:07:33.190 Installing lib/librte_distributor.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:07:33.190 Installing lib/librte_distributor.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:07:33.190 Installing lib/librte_dmadev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:07:33.190 Installing lib/librte_dmadev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:07:33.190 Installing lib/librte_efd.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:07:33.190 Installing lib/librte_efd.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:07:33.190 Installing lib/librte_eventdev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:07:33.190 Installing lib/librte_eventdev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:07:33.190 Installing lib/librte_dispatcher.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:07:33.190 Installing lib/librte_dispatcher.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:07:33.190 Installing lib/librte_gpudev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:07:33.190 Installing lib/librte_gpudev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:07:33.190 Installing lib/librte_gro.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:07:33.190 Installing lib/librte_gro.so.24.0 to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:07:33.190 Installing lib/librte_gso.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:07:33.190 Installing lib/librte_gso.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:07:33.190 Installing lib/librte_ip_frag.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:07:33.190 Installing lib/librte_ip_frag.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:07:33.190 Installing lib/librte_jobstats.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:07:33.190 Installing lib/librte_jobstats.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:07:33.190 Installing lib/librte_latencystats.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:07:33.190 Installing lib/librte_latencystats.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:07:33.190 Installing lib/librte_lpm.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:07:33.190 Installing lib/librte_lpm.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:07:33.190 Installing lib/librte_member.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:07:33.190 Installing lib/librte_member.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:07:33.190 Installing lib/librte_pcapng.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:07:33.190 Installing lib/librte_pcapng.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:07:33.190 Installing lib/librte_power.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:07:33.190 Installing lib/librte_power.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:07:33.190 Installing lib/librte_rawdev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:07:33.190 Installing lib/librte_rawdev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:07:33.190 Installing lib/librte_regexdev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:07:33.190 Installing lib/librte_regexdev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:07:33.190 Installing lib/librte_mldev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:07:33.190 Installing lib/librte_mldev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:07:33.190 Installing lib/librte_rib.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:07:33.190 Installing lib/librte_rib.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:07:33.190 Installing lib/librte_reorder.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:07:33.191 Installing lib/librte_reorder.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:07:33.191 Installing lib/librte_sched.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:07:33.191 Installing lib/librte_sched.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:07:33.191 Installing lib/librte_security.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:07:33.191 Installing lib/librte_security.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:07:33.191 Installing lib/librte_stack.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:07:33.191 Installing lib/librte_stack.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:07:33.191 Installing lib/librte_vhost.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 
00:07:33.191 Installing lib/librte_vhost.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:07:33.191 Installing lib/librte_ipsec.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:07:33.191 Installing lib/librte_ipsec.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:07:33.191 Installing lib/librte_pdcp.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:07:33.191 Installing lib/librte_pdcp.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:07:33.191 Installing lib/librte_fib.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:07:33.191 Installing lib/librte_fib.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:07:33.191 Installing lib/librte_port.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:07:33.191 Installing lib/librte_port.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:07:33.191 Installing lib/librte_pdump.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:07:33.191 Installing lib/librte_pdump.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:07:33.191 Installing lib/librte_table.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:07:33.191 Installing lib/librte_table.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:07:33.191 Installing lib/librte_pipeline.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:07:33.191 Installing lib/librte_pipeline.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:07:33.191 Installing lib/librte_graph.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:07:33.191 Installing lib/librte_graph.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:07:33.191 Installing lib/librte_node.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:07:33.191 Installing lib/librte_node.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:07:33.191 Installing drivers/librte_bus_pci.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:07:33.191 Installing drivers/librte_bus_pci.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:07:33.191 Installing drivers/librte_bus_vdev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:07:33.191 Installing drivers/librte_bus_vdev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:07:33.191 Installing drivers/librte_mempool_ring.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:07:33.191 Installing drivers/librte_mempool_ring.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:07:33.191 Installing drivers/librte_net_i40e.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:07:33.191 Installing drivers/librte_net_i40e.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:07:33.191 Installing app/dpdk-dumpcap to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:07:33.191 Installing app/dpdk-graph to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:07:33.191 Installing app/dpdk-pdump to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:07:33.191 Installing app/dpdk-proc-info to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:07:33.191 Installing app/dpdk-test-acl to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:07:33.452 Installing app/dpdk-test-bbdev to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:07:33.452 Installing app/dpdk-test-cmdline to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:07:33.452 Installing app/dpdk-test-compress-perf to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:07:33.452 Installing app/dpdk-test-crypto-perf to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:07:33.452 Installing app/dpdk-test-dma-perf to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:07:33.452 Installing app/dpdk-test-eventdev to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:07:33.452 Installing app/dpdk-test-fib to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:07:33.452 Installing app/dpdk-test-flow-perf to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:07:33.452 Installing app/dpdk-test-gpudev to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:07:33.452 Installing app/dpdk-test-mldev to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:07:33.452 Installing app/dpdk-test-pipeline to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:07:33.452 Installing app/dpdk-testpmd to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:07:33.452 Installing app/dpdk-test-regex to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:07:33.452 Installing app/dpdk-test-sad to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:07:33.452 Installing app/dpdk-test-security-perf to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:07:33.452 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/config/rte_config.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.452 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/log/rte_log.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.452 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/kvargs/rte_kvargs.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.452 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/telemetry/rte_telemetry.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.452 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_atomic.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:07:33.452 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_byteorder.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:07:33.452 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_cpuflags.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:07:33.452 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_cycles.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:07:33.452 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_io.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:07:33.452 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_memcpy.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:07:33.452 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_pause.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:07:33.452 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:07:33.452 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_prefetch.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:07:33.453 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_rwlock.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:07:33.453 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_spinlock.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:07:33.453 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_vect.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:07:33.453 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.453 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.453 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_cpuflags.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.453 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_cycles.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.453 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_io.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.453 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_memcpy.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.453 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_pause.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.453 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.453 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_prefetch.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.453 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_rtm.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.453 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_rwlock.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.453 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_spinlock.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.453 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_vect.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.453 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_32.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.453 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_64.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.453 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.453 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.453 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_alarm.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.453 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_bitmap.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.453 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_bitops.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.453 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_branch_prediction.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.453 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_bus.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.453 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_class.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.453 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_common.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.453 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_compat.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.453 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_debug.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.453 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_dev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.453 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_devargs.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.453 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_eal.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.453 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_eal_memconfig.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.453 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_eal_trace.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.453 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_errno.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.453 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_epoll.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.453 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_fbarray.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.453 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_hexdump.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.453 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_hypervisor.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.453 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_interrupts.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.453 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_keepalive.h to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.453 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_launch.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.453 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_lcore.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.453 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_lock_annotations.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.453 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_malloc.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.453 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_mcslock.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.453 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_memory.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.453 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_memzone.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.453 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.453 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_features.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.453 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_per_lcore.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.453 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_pflock.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.453 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_random.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.453 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_reciprocal.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.453 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_seqcount.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.453 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_seqlock.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.453 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_service.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.453 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_service_component.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.453 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_stdatomic.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.453 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_string_fns.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.453 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_tailq.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.453 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_thread.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.453 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_ticketlock.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.453 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_time.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.453 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_trace.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.453 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_trace_point.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.453 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_trace_point_register.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.453 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_uuid.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.453 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_version.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.453 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_vfio.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.453 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/linux/include/rte_os.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.453 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.453 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_core.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.453 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_elem.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.453 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_elem_pvt.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.453 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_c11_pvt.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.453 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_generic_pvt.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.453 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_hts.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.453 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.453 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_peek.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.453 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.453 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_peek_zc.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.453 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_rts.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.454 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.454 
Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/rcu/rte_rcu_qsbr.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.454 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/mempool/rte_mempool.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.454 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/mempool/rte_mempool_trace_fp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.454 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/mbuf/rte_mbuf.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.454 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/mbuf/rte_mbuf_core.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.454 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/mbuf/rte_mbuf_ptype.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.454 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.454 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/mbuf/rte_mbuf_dyn.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.454 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_ip.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.454 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_tcp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.454 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_udp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.454 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_tls.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.454 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_dtls.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.454 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_esp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.454 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_sctp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.454 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_icmp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.454 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_arp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.454 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_ether.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.454 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_macsec.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.454 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_vxlan.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.454 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_gre.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.454 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_gtp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.454 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_net.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.454 
Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_net_crc.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.454 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_mpls.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.454 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_higig.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.454 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_ecpri.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.454 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_pdcp_hdr.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.454 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_geneve.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.454 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_l2tpv2.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.454 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_ppp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.454 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_ib.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.454 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/meter/rte_meter.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.454 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_cman.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.454 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_ethdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.454 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.454 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_dev_info.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.454 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_flow.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.454 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_flow_driver.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.454 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_mtr.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.454 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_mtr_driver.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.454 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_tm.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.454 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_tm_driver.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.454 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_ethdev_core.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.454 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_eth_ctrl.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.454 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pci/rte_pci.h to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.454 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.454 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_parse.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.454 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_parse_num.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.454 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.454 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.454 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_parse_string.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.454 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_rdline.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.454 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_vt100.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.454 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_socket.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.454 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_cirbuf.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.454 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_parse_portlist.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.454 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/metrics/rte_metrics.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.454 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/metrics/rte_metrics_telemetry.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.454 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_fbk_hash.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.454 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_hash_crc.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.454 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_hash.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.454 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_jhash.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.454 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_thash.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.454 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_thash_gfni.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.454 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_crc_arm64.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.454 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_crc_generic.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.454 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_crc_sw.h to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.454 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_crc_x86.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.454 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_thash_x86_gfni.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.454 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/timer/rte_timer.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.454 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/acl/rte_acl.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.454 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/acl/rte_acl_osdep.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.454 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/bbdev/rte_bbdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.454 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/bbdev/rte_bbdev_pmd.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.454 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/bbdev/rte_bbdev_op.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.454 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/bitratestats/rte_bitrate.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.454 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/bpf/bpf_def.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.454 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/bpf/rte_bpf.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.454 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/bpf/rte_bpf_ethdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.454 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cfgfile/rte_cfgfile.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.454 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/compressdev/rte_compressdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.454 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/compressdev/rte_comp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.454 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.454 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.454 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cryptodev/rte_crypto.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.455 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cryptodev/rte_crypto_sym.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.455 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cryptodev/rte_crypto_asym.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.455 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_core.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.455 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/distributor/rte_distributor.h to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.455 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/dmadev/rte_dmadev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.455 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/dmadev/rte_dmadev_core.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.455 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/efd/rte_efd.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.455 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.455 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eventdev/rte_event_dma_adapter.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.455 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.455 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.455 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eventdev/rte_event_ring.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.455 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eventdev/rte_event_timer_adapter.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.455 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eventdev/rte_eventdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.455 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.455 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eventdev/rte_eventdev_core.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.455 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/dispatcher/rte_dispatcher.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.455 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/gpudev/rte_gpudev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.455 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/gro/rte_gro.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.455 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/gso/rte_gso.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.455 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ip_frag/rte_ip_frag.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.455 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/jobstats/rte_jobstats.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.455 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/latencystats/rte_latencystats.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.455 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/lpm/rte_lpm.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.455 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/lpm/rte_lpm6.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.455 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/lpm/rte_lpm_altivec.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.455 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/lpm/rte_lpm_neon.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.455 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/lpm/rte_lpm_scalar.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.455 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/lpm/rte_lpm_sse.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.455 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/lpm/rte_lpm_sve.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.455 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/member/rte_member.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.455 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pcapng/rte_pcapng.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.455 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/power/rte_power.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.455 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/power/rte_power_guest_channel.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.455 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/power/rte_power_pmd_mgmt.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.455 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/power/rte_power_uncore.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.455 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/rawdev/rte_rawdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.455 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/rawdev/rte_rawdev_pmd.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.455 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/regexdev/rte_regexdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.455 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/regexdev/rte_regexdev_driver.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.455 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/regexdev/rte_regexdev_core.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.455 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/mldev/rte_mldev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.455 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/mldev/rte_mldev_core.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.455 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/rib/rte_rib.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.455 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/rib/rte_rib6.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.455 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/reorder/rte_reorder.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.455 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/sched/rte_approx.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.455 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/sched/rte_red.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.455 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/sched/rte_sched.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.455 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/sched/rte_sched_common.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.455 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/sched/rte_pie.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.455 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/security/rte_security.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.455 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/security/rte_security_driver.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.455 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/stack/rte_stack.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.455 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/stack/rte_stack_std.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.455 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/stack/rte_stack_lf.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.455 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/stack/rte_stack_lf_generic.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.455 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/stack/rte_stack_lf_c11.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.455 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/stack/rte_stack_lf_stubs.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.455 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/vhost/rte_vdpa.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.455 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/vhost/rte_vhost.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.455 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/vhost/rte_vhost_async.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.455 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/vhost/rte_vhost_crypto.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.455 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ipsec/rte_ipsec.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.455 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sa.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.455 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sad.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.455 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ipsec/rte_ipsec_group.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.455 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pdcp/rte_pdcp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.455 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pdcp/rte_pdcp_group.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.455 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/fib/rte_fib.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.455 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/fib/rte_fib6.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.455 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_ethdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.455 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_fd.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.455 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_frag.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.455 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_ras.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.455 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.455 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_ring.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.455 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_sched.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.455 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_source_sink.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.455 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_sym_crypto.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.455 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_eventdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.455 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_swx_port.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.455 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_swx_port_ethdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.455 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_swx_port_fd.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.455 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_swx_port_ring.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.455 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_swx_port_source_sink.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.455 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pdump/rte_pdump.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.456 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_lru.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.456 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_swx_hash_func.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.456 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_swx_table.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.456 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_swx_table_em.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.456 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_swx_table_learner.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.456 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_swx_table_selector.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.456 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_swx_table_wm.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.456 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.456 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_acl.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.456 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_array.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.456 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_hash.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.456 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_hash_cuckoo.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.456 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_hash_func.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.456 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_lpm.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.456 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_lpm_ipv6.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.456 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_stub.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.456 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_lru_arm64.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.456 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_lru_x86.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.456 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_hash_func_arm64.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.456 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pipeline/rte_pipeline.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.456 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pipeline/rte_port_in_action.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.456 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pipeline/rte_table_action.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.456 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pipeline/rte_swx_ipsec.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.456 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pipeline/rte_swx_pipeline.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.456 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pipeline/rte_swx_extern.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.456 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pipeline/rte_swx_ctl.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 
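Everything above stages the public DPDK headers into one flat directory, /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include, so a single -I path covers the whole API surface. A minimal sketch, assuming a hypothetical demo.c that includes one of the headers just installed (a real DPDK compile would normally take its flags from pkg-config, sketched further down):

  # demo.c is hypothetical; only the -I path comes from this log
  gcc -c demo.c -I/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include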
00:07:33.456 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/graph/rte_graph.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.456 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/graph/rte_graph_worker.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.456 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.456 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/graph/rte_graph_model_rtc.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.456 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/graph/rte_graph_worker_common.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.456 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/node/rte_node_eth_api.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.456 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/node/rte_node_ip4_api.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.456 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/node/rte_node_ip6_api.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.456 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/node/rte_node_udp4_input_api.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.456 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/drivers/bus/pci/rte_bus_pci.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.456 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.456 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.456 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/buildtools/dpdk-cmdline-gen.py to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:07:33.456 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/usertools/dpdk-devbind.py to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:07:33.456 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/usertools/dpdk-pmdinfo.py to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:07:33.456 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/usertools/dpdk-telemetry.py to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:07:33.456 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/usertools/dpdk-hugepages.py to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:07:33.456 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/usertools/dpdk-rss-flows.py to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:07:33.456 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build-tmp/rte_build_config.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.456 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/pkgconfig 00:07:33.456 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build-tmp/meson-private/libdpdk.pc to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/pkgconfig 00:07:33.456 Installing symlink pointing to librte_log.so.24.0 to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_log.so.24 00:07:33.456 Installing symlink pointing to librte_log.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_log.so 00:07:33.456 Installing symlink pointing to librte_kvargs.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_kvargs.so.24 00:07:33.456 Installing symlink pointing to librte_kvargs.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_kvargs.so 00:07:33.456 Installing symlink pointing to librte_telemetry.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_telemetry.so.24 00:07:33.456 Installing symlink pointing to librte_telemetry.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_telemetry.so 00:07:33.456 Installing symlink pointing to librte_eal.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_eal.so.24 00:07:33.456 Installing symlink pointing to librte_eal.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_eal.so 00:07:33.456 Installing symlink pointing to librte_ring.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_ring.so.24 00:07:33.456 Installing symlink pointing to librte_ring.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_ring.so 00:07:33.456 Installing symlink pointing to librte_rcu.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_rcu.so.24 00:07:33.456 Installing symlink pointing to librte_rcu.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_rcu.so 00:07:33.456 Installing symlink pointing to librte_mempool.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_mempool.so.24 00:07:33.456 Installing symlink pointing to librte_mempool.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_mempool.so 00:07:33.456 Installing symlink pointing to librte_mbuf.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_mbuf.so.24 00:07:33.456 Installing symlink pointing to librte_mbuf.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_mbuf.so 00:07:33.456 Installing symlink pointing to librte_net.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_net.so.24 00:07:33.456 Installing symlink pointing to librte_net.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_net.so 00:07:33.456 Installing symlink pointing to librte_meter.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_meter.so.24 00:07:33.456 Installing symlink pointing to librte_meter.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_meter.so 00:07:33.456 Installing symlink pointing to librte_ethdev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_ethdev.so.24 00:07:33.456 Installing symlink pointing to librte_ethdev.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_ethdev.so 00:07:33.456 Installing symlink pointing to librte_pci.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_pci.so.24 00:07:33.456 Installing symlink pointing to librte_pci.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_pci.so 00:07:33.456 Installing symlink pointing to librte_cmdline.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_cmdline.so.24 00:07:33.456 Installing symlink pointing to librte_cmdline.so.24 to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_cmdline.so 00:07:33.456 Installing symlink pointing to librte_metrics.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_metrics.so.24 00:07:33.456 Installing symlink pointing to librte_metrics.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_metrics.so 00:07:33.456 Installing symlink pointing to librte_hash.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_hash.so.24 00:07:33.456 Installing symlink pointing to librte_hash.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_hash.so 00:07:33.456 Installing symlink pointing to librte_timer.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_timer.so.24 00:07:33.456 Installing symlink pointing to librte_timer.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_timer.so 00:07:33.456 Installing symlink pointing to librte_acl.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_acl.so.24 00:07:33.456 Installing symlink pointing to librte_acl.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_acl.so 00:07:33.456 Installing symlink pointing to librte_bbdev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_bbdev.so.24 00:07:33.456 Installing symlink pointing to librte_bbdev.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_bbdev.so 00:07:33.456 Installing symlink pointing to librte_bitratestats.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_bitratestats.so.24 00:07:33.456 Installing symlink pointing to librte_bitratestats.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_bitratestats.so 00:07:33.456 Installing symlink pointing to librte_bpf.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_bpf.so.24 00:07:33.456 Installing symlink pointing to librte_bpf.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_bpf.so 00:07:33.456 Installing symlink pointing to librte_cfgfile.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_cfgfile.so.24 00:07:33.456 Installing symlink pointing to librte_cfgfile.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_cfgfile.so 00:07:33.457 Installing symlink pointing to librte_compressdev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_compressdev.so.24 00:07:33.457 Installing symlink pointing to librte_compressdev.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_compressdev.so 00:07:33.457 Installing symlink pointing to librte_cryptodev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_cryptodev.so.24 00:07:33.457 Installing symlink pointing to librte_cryptodev.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_cryptodev.so 00:07:33.457 Installing symlink pointing to librte_distributor.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_distributor.so.24 00:07:33.457 Installing symlink pointing to librte_distributor.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_distributor.so 00:07:33.457 Installing symlink pointing to librte_dmadev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_dmadev.so.24 00:07:33.457 Installing symlink pointing to librte_dmadev.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_dmadev.so 00:07:33.457 
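With libdpdk.pc and libdpdk-libs.pc staged into build/lib/pkgconfig above, the whole tree is consumable through pkg-config, and each library gets the usual chain of a real file (librte_X.so.24.0), a soname link (.so.24), and a dev link (.so). A minimal sketch, assuming pkg-config is on PATH; the exact version string it reports depends on the DPDK release (v23.11 here):

  export PKG_CONFIG_PATH=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/pkgconfig
  pkg-config --modversion libdpdk      # release version of the staged build
  pkg-config --cflags --libs libdpdk   # the flags an application build would consume

This is the same mechanism the SPDK configure step below relies on when it reports using dpdk/build/lib/pkgconfig for additional libs.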
Installing symlink pointing to librte_efd.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_efd.so.24 00:07:33.457 Installing symlink pointing to librte_efd.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_efd.so 00:07:33.457 Installing symlink pointing to librte_eventdev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_eventdev.so.24 00:07:33.457 Installing symlink pointing to librte_eventdev.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_eventdev.so 00:07:33.457 Installing symlink pointing to librte_dispatcher.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_dispatcher.so.24 00:07:33.457 Installing symlink pointing to librte_dispatcher.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_dispatcher.so 00:07:33.457 Installing symlink pointing to librte_gpudev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_gpudev.so.24 00:07:33.457 Installing symlink pointing to librte_gpudev.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_gpudev.so 00:07:33.457 Installing symlink pointing to librte_gro.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_gro.so.24 00:07:33.457 Installing symlink pointing to librte_gro.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_gro.so 00:07:33.457 Installing symlink pointing to librte_gso.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_gso.so.24 00:07:33.457 Installing symlink pointing to librte_gso.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_gso.so 00:07:33.457 Installing symlink pointing to librte_ip_frag.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_ip_frag.so.24 00:07:33.457 Installing symlink pointing to librte_ip_frag.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_ip_frag.so 00:07:33.457 Installing symlink pointing to librte_jobstats.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_jobstats.so.24 00:07:33.457 './librte_bus_pci.so' -> 'dpdk/pmds-24.0/librte_bus_pci.so' 00:07:33.457 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24' 00:07:33.457 './librte_bus_pci.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24.0' 00:07:33.457 './librte_bus_vdev.so' -> 'dpdk/pmds-24.0/librte_bus_vdev.so' 00:07:33.457 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24' 00:07:33.457 './librte_bus_vdev.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24.0' 00:07:33.457 './librte_mempool_ring.so' -> 'dpdk/pmds-24.0/librte_mempool_ring.so' 00:07:33.457 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24' 00:07:33.457 './librte_mempool_ring.so.24.0' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24.0' 00:07:33.457 './librte_net_i40e.so' -> 'dpdk/pmds-24.0/librte_net_i40e.so' 00:07:33.457 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24' 00:07:33.457 './librte_net_i40e.so.24.0' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24.0' 00:07:33.457 Installing symlink pointing to librte_jobstats.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_jobstats.so 00:07:33.457 Installing symlink pointing to librte_latencystats.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_latencystats.so.24 00:07:33.457 Installing symlink pointing to librte_latencystats.so.24 to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_latencystats.so 00:07:33.457 Installing symlink pointing to librte_lpm.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_lpm.so.24 00:07:33.457 Installing symlink pointing to librte_lpm.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_lpm.so 00:07:33.457 Installing symlink pointing to librte_member.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_member.so.24 00:07:33.457 Installing symlink pointing to librte_member.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_member.so 00:07:33.457 Installing symlink pointing to librte_pcapng.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_pcapng.so.24 00:07:33.457 Installing symlink pointing to librte_pcapng.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_pcapng.so 00:07:33.457 Installing symlink pointing to librte_power.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_power.so.24 00:07:33.457 Installing symlink pointing to librte_power.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_power.so 00:07:33.457 Installing symlink pointing to librte_rawdev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_rawdev.so.24 00:07:33.457 Installing symlink pointing to librte_rawdev.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_rawdev.so 00:07:33.457 Installing symlink pointing to librte_regexdev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_regexdev.so.24 00:07:33.457 Installing symlink pointing to librte_regexdev.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_regexdev.so 00:07:33.457 Installing symlink pointing to librte_mldev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_mldev.so.24 00:07:33.457 Installing symlink pointing to librte_mldev.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_mldev.so 00:07:33.457 Installing symlink pointing to librte_rib.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_rib.so.24 00:07:33.457 Installing symlink pointing to librte_rib.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_rib.so 00:07:33.457 Installing symlink pointing to librte_reorder.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_reorder.so.24 00:07:33.457 Installing symlink pointing to librte_reorder.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_reorder.so 00:07:33.457 Installing symlink pointing to librte_sched.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_sched.so.24 00:07:33.457 Installing symlink pointing to librte_sched.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_sched.so 00:07:33.457 Installing symlink pointing to librte_security.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_security.so.24 00:07:33.457 Installing symlink pointing to librte_security.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_security.so 00:07:33.457 Installing symlink pointing to librte_stack.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_stack.so.24 00:07:33.457 Installing symlink pointing to librte_stack.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_stack.so 00:07:33.457 Installing symlink pointing to librte_vhost.so.24.0 to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_vhost.so.24 00:07:33.457 Installing symlink pointing to librte_vhost.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_vhost.so 00:07:33.457 Installing symlink pointing to librte_ipsec.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_ipsec.so.24 00:07:33.457 Installing symlink pointing to librte_ipsec.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_ipsec.so 00:07:33.457 Installing symlink pointing to librte_pdcp.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_pdcp.so.24 00:07:33.457 Installing symlink pointing to librte_pdcp.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_pdcp.so 00:07:33.457 Installing symlink pointing to librte_fib.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_fib.so.24 00:07:33.457 Installing symlink pointing to librte_fib.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_fib.so 00:07:33.457 Installing symlink pointing to librte_port.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_port.so.24 00:07:33.457 Installing symlink pointing to librte_port.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_port.so 00:07:33.457 Installing symlink pointing to librte_pdump.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_pdump.so.24 00:07:33.457 Installing symlink pointing to librte_pdump.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_pdump.so 00:07:33.457 Installing symlink pointing to librte_table.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_table.so.24 00:07:33.457 Installing symlink pointing to librte_table.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_table.so 00:07:33.457 Installing symlink pointing to librte_pipeline.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_pipeline.so.24 00:07:33.457 Installing symlink pointing to librte_pipeline.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_pipeline.so 00:07:33.457 Installing symlink pointing to librte_graph.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_graph.so.24 00:07:33.457 Installing symlink pointing to librte_graph.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_graph.so 00:07:33.457 Installing symlink pointing to librte_node.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_node.so.24 00:07:33.457 Installing symlink pointing to librte_node.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_node.so 00:07:33.457 Installing symlink pointing to librte_bus_pci.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24 00:07:33.457 Installing symlink pointing to librte_bus_pci.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:07:33.457 Installing symlink pointing to librte_bus_vdev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24 00:07:33.457 Installing symlink pointing to librte_bus_vdev.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:07:33.458 Installing symlink pointing to librte_mempool_ring.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24 
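The rename lines above ('./librte_bus_pci.so' -> 'dpdk/pmds-24.0/...') move the driver shared objects out of build/lib into the dpdk/pmds-24.0 plugin directory, from which EAL can pick up PMDs at startup rather than the application linking them directly. A hedged sketch of loading one driver explicitly via the EAL -d option; dpdk-testpmd is a stock DPDK example binary, not something this job is shown building:

  ./dpdk-testpmd \
      -d /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so \
      -l 0-1 -- -i   # -d accepts a single driver .so or a directory of them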
00:07:33.458 Installing symlink pointing to librte_mempool_ring.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:07:33.458 Installing symlink pointing to librte_net_i40e.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24 00:07:33.458 Installing symlink pointing to librte_net_i40e.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:07:33.458 Running custom install script '/bin/sh /var/jenkins/workspace/nvmf-phy-autotest/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.0' 00:07:33.458 13:38:33 build_native_dpdk -- common/autobuild_common.sh@220 -- $ cat 00:07:33.458 13:38:33 build_native_dpdk -- common/autobuild_common.sh@225 -- $ cd /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:07:33.458 00:07:33.458 real 0m26.108s 00:07:33.458 user 7m28.649s 00:07:33.458 sys 2m11.110s 00:07:33.458 13:38:33 build_native_dpdk -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:07:33.458 13:38:33 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:07:33.458 ************************************ 00:07:33.458 END TEST build_native_dpdk 00:07:33.458 ************************************ 00:07:33.458 13:38:33 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:07:33.458 13:38:33 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:07:33.458 13:38:33 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:07:33.458 13:38:33 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:07:33.458 13:38:33 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:07:33.458 13:38:33 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:07:33.458 13:38:33 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:07:33.458 13:38:33 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build --with-shared 00:07:33.716 Using /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/pkgconfig for additional libs... 00:07:33.716 DPDK libraries: /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:07:33.716 DPDK includes: //var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:33.716 Using default SPDK env in /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:07:33.974 Using 'verbs' RDMA provider 00:07:47.119 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-phy-autotest/spdk/.spdk-isal.log)...done. 00:07:59.328 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:07:59.328 Creating mk/config.mk...done. 00:07:59.328 Creating mk/cc.flags.mk...done. 00:07:59.328 Type 'make' to build. 00:07:59.328 13:38:58 -- spdk/autobuild.sh@70 -- $ run_test make make -j112 00:07:59.328 13:38:58 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:07:59.328 13:38:58 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:07:59.328 13:38:58 -- common/autotest_common.sh@10 -- $ set +x 00:07:59.328 ************************************ 00:07:59.328 START TEST make 00:07:59.328 ************************************ 00:07:59.328 13:38:58 make -- common/autotest_common.sh@1129 -- $ make -j112 00:07:59.328 make[1]: Nothing to be done for 'all'. 
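Stripped of the CI plumbing, the build recorded above reduces to pointing SPDK's configure at the freshly staged DPDK and building shared; a trimmed-down sketch with the two decisive flags (the full option list is in the configure line above):

  cd /var/jenkins/workspace/nvmf-phy-autotest/spdk
  ./configure --with-dpdk=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build --with-shared
  make -j"$(nproc)"   # the job itself pins the width with make -j112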
00:08:31.414 CC lib/ut/ut.o 00:08:31.414 CC lib/log/log.o 00:08:31.414 CC lib/log/log_flags.o 00:08:31.414 CC lib/log/log_deprecated.o 00:08:31.414 CC lib/ut_mock/mock.o 00:08:31.414 LIB libspdk_log.a 00:08:31.414 LIB libspdk_ut.a 00:08:31.414 LIB libspdk_ut_mock.a 00:08:31.414 SO libspdk_ut.so.2.0 00:08:31.414 SO libspdk_log.so.7.1 00:08:31.414 SO libspdk_ut_mock.so.6.0 00:08:31.414 SYMLINK libspdk_ut.so 00:08:31.414 SYMLINK libspdk_ut_mock.so 00:08:31.414 SYMLINK libspdk_log.so 00:08:31.414 CC lib/ioat/ioat.o 00:08:31.414 CC lib/dma/dma.o 00:08:31.414 CXX lib/trace_parser/trace.o 00:08:31.414 CC lib/util/base64.o 00:08:31.414 CC lib/util/bit_array.o 00:08:31.414 CC lib/util/cpuset.o 00:08:31.414 CC lib/util/crc16.o 00:08:31.414 CC lib/util/crc32.o 00:08:31.414 CC lib/util/crc32c.o 00:08:31.414 CC lib/util/crc32_ieee.o 00:08:31.414 CC lib/util/crc64.o 00:08:31.414 CC lib/util/dif.o 00:08:31.414 CC lib/util/fd.o 00:08:31.414 CC lib/util/fd_group.o 00:08:31.414 CC lib/util/file.o 00:08:31.414 CC lib/util/hexlify.o 00:08:31.414 CC lib/util/iov.o 00:08:31.414 CC lib/util/math.o 00:08:31.414 CC lib/util/net.o 00:08:31.414 CC lib/util/pipe.o 00:08:31.414 CC lib/util/strerror_tls.o 00:08:31.414 CC lib/util/string.o 00:08:31.414 CC lib/util/uuid.o 00:08:31.414 CC lib/util/xor.o 00:08:31.414 CC lib/util/zipf.o 00:08:31.414 CC lib/util/md5.o 00:08:31.414 CC lib/vfio_user/host/vfio_user.o 00:08:31.414 CC lib/vfio_user/host/vfio_user_pci.o 00:08:31.414 LIB libspdk_dma.a 00:08:31.414 SO libspdk_dma.so.5.0 00:08:31.414 LIB libspdk_ioat.a 00:08:31.414 SYMLINK libspdk_dma.so 00:08:31.414 SO libspdk_ioat.so.7.0 00:08:31.414 SYMLINK libspdk_ioat.so 00:08:31.414 LIB libspdk_vfio_user.a 00:08:31.414 SO libspdk_vfio_user.so.5.0 00:08:31.414 LIB libspdk_util.a 00:08:31.414 SYMLINK libspdk_vfio_user.so 00:08:31.414 SO libspdk_util.so.10.1 00:08:31.414 SYMLINK libspdk_util.so 00:08:31.414 LIB libspdk_trace_parser.a 00:08:31.414 SO libspdk_trace_parser.so.6.0 00:08:31.414 SYMLINK libspdk_trace_parser.so 00:08:31.414 CC lib/rdma_utils/rdma_utils.o 00:08:31.414 CC lib/idxd/idxd.o 00:08:31.414 CC lib/idxd/idxd_user.o 00:08:31.414 CC lib/idxd/idxd_kernel.o 00:08:31.414 CC lib/json/json_parse.o 00:08:31.414 CC lib/conf/conf.o 00:08:31.414 CC lib/json/json_util.o 00:08:31.414 CC lib/vmd/vmd.o 00:08:31.414 CC lib/vmd/led.o 00:08:31.414 CC lib/json/json_write.o 00:08:31.414 CC lib/env_dpdk/env.o 00:08:31.414 CC lib/env_dpdk/memory.o 00:08:31.414 CC lib/env_dpdk/pci.o 00:08:31.414 CC lib/env_dpdk/init.o 00:08:31.414 CC lib/env_dpdk/threads.o 00:08:31.414 CC lib/env_dpdk/pci_ioat.o 00:08:31.414 CC lib/env_dpdk/pci_virtio.o 00:08:31.414 CC lib/env_dpdk/pci_vmd.o 00:08:31.414 CC lib/env_dpdk/pci_idxd.o 00:08:31.414 CC lib/env_dpdk/pci_event.o 00:08:31.414 CC lib/env_dpdk/sigbus_handler.o 00:08:31.414 CC lib/env_dpdk/pci_dpdk.o 00:08:31.414 CC lib/env_dpdk/pci_dpdk_2207.o 00:08:31.414 CC lib/env_dpdk/pci_dpdk_2211.o 00:08:31.414 LIB libspdk_conf.a 00:08:31.414 LIB libspdk_rdma_utils.a 00:08:31.414 SO libspdk_conf.so.6.0 00:08:31.414 LIB libspdk_json.a 00:08:31.414 SO libspdk_rdma_utils.so.1.0 00:08:31.414 SYMLINK libspdk_conf.so 00:08:31.414 SO libspdk_json.so.6.0 00:08:31.414 SYMLINK libspdk_rdma_utils.so 00:08:31.414 SYMLINK libspdk_json.so 00:08:31.414 LIB libspdk_idxd.a 00:08:31.414 SO libspdk_idxd.so.12.1 00:08:31.414 LIB libspdk_vmd.a 00:08:31.414 SO libspdk_vmd.so.6.0 00:08:31.414 SYMLINK libspdk_idxd.so 00:08:31.414 SYMLINK libspdk_vmd.so 00:08:31.414 CC lib/rdma_provider/common.o 00:08:31.414 CC 
lib/rdma_provider/rdma_provider_verbs.o 00:08:31.414 CC lib/jsonrpc/jsonrpc_server.o 00:08:31.414 CC lib/jsonrpc/jsonrpc_client.o 00:08:31.414 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:08:31.414 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:08:31.414 LIB libspdk_rdma_provider.a 00:08:31.414 SO libspdk_rdma_provider.so.7.0 00:08:31.414 LIB libspdk_jsonrpc.a 00:08:31.414 SYMLINK libspdk_rdma_provider.so 00:08:31.414 SO libspdk_jsonrpc.so.6.0 00:08:31.414 LIB libspdk_env_dpdk.a 00:08:31.414 SYMLINK libspdk_jsonrpc.so 00:08:31.414 SO libspdk_env_dpdk.so.15.1 00:08:31.414 SYMLINK libspdk_env_dpdk.so 00:08:31.414 CC lib/rpc/rpc.o 00:08:31.414 LIB libspdk_rpc.a 00:08:31.414 SO libspdk_rpc.so.6.0 00:08:31.414 SYMLINK libspdk_rpc.so 00:08:31.414 CC lib/notify/notify.o 00:08:31.414 CC lib/notify/notify_rpc.o 00:08:31.414 CC lib/keyring/keyring.o 00:08:31.414 CC lib/keyring/keyring_rpc.o 00:08:31.414 CC lib/trace/trace.o 00:08:31.414 CC lib/trace/trace_flags.o 00:08:31.414 CC lib/trace/trace_rpc.o 00:08:31.414 LIB libspdk_notify.a 00:08:31.414 SO libspdk_notify.so.6.0 00:08:31.414 LIB libspdk_trace.a 00:08:31.414 LIB libspdk_keyring.a 00:08:31.414 SO libspdk_trace.so.11.0 00:08:31.414 SO libspdk_keyring.so.2.0 00:08:31.414 SYMLINK libspdk_notify.so 00:08:31.414 SYMLINK libspdk_trace.so 00:08:31.414 SYMLINK libspdk_keyring.so 00:08:31.414 CC lib/thread/thread.o 00:08:31.414 CC lib/thread/iobuf.o 00:08:31.414 CC lib/sock/sock.o 00:08:31.414 CC lib/sock/sock_rpc.o 00:08:31.414 LIB libspdk_sock.a 00:08:31.414 SO libspdk_sock.so.10.0 00:08:31.414 SYMLINK libspdk_sock.so 00:08:31.414 CC lib/nvme/nvme_ctrlr_cmd.o 00:08:31.414 CC lib/nvme/nvme_ctrlr.o 00:08:31.414 CC lib/nvme/nvme_fabric.o 00:08:31.414 CC lib/nvme/nvme_ns_cmd.o 00:08:31.414 CC lib/nvme/nvme_ns.o 00:08:31.414 CC lib/nvme/nvme_pcie_common.o 00:08:31.414 CC lib/nvme/nvme_pcie.o 00:08:31.414 CC lib/nvme/nvme_qpair.o 00:08:31.414 CC lib/nvme/nvme.o 00:08:31.414 CC lib/nvme/nvme_quirks.o 00:08:31.414 CC lib/nvme/nvme_transport.o 00:08:31.414 CC lib/nvme/nvme_discovery.o 00:08:31.414 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:08:31.414 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:08:31.414 CC lib/nvme/nvme_tcp.o 00:08:31.414 CC lib/nvme/nvme_opal.o 00:08:31.414 CC lib/nvme/nvme_io_msg.o 00:08:31.414 CC lib/nvme/nvme_poll_group.o 00:08:31.414 CC lib/nvme/nvme_zns.o 00:08:31.414 CC lib/nvme/nvme_stubs.o 00:08:31.414 CC lib/nvme/nvme_auth.o 00:08:31.414 CC lib/nvme/nvme_cuse.o 00:08:31.414 CC lib/nvme/nvme_rdma.o 00:08:31.414 LIB libspdk_thread.a 00:08:31.414 SO libspdk_thread.so.11.0 00:08:31.414 SYMLINK libspdk_thread.so 00:08:31.672 CC lib/accel/accel.o 00:08:31.672 CC lib/accel/accel_rpc.o 00:08:31.672 CC lib/accel/accel_sw.o 00:08:31.672 CC lib/init/json_config.o 00:08:31.672 CC lib/init/subsystem_rpc.o 00:08:31.672 CC lib/init/rpc.o 00:08:31.672 CC lib/init/subsystem.o 00:08:31.672 CC lib/blob/request.o 00:08:31.672 CC lib/blob/blobstore.o 00:08:31.672 CC lib/blob/zeroes.o 00:08:31.672 CC lib/blob/blob_bs_dev.o 00:08:31.672 CC lib/virtio/virtio.o 00:08:31.672 CC lib/virtio/virtio_vhost_user.o 00:08:31.672 CC lib/virtio/virtio_vfio_user.o 00:08:31.672 CC lib/virtio/virtio_pci.o 00:08:31.672 CC lib/fsdev/fsdev.o 00:08:31.672 CC lib/fsdev/fsdev_io.o 00:08:31.672 CC lib/fsdev/fsdev_rpc.o 00:08:31.672 LIB libspdk_init.a 00:08:31.931 SO libspdk_init.so.6.0 00:08:31.931 LIB libspdk_virtio.a 00:08:31.931 SO libspdk_virtio.so.7.0 00:08:31.931 SYMLINK libspdk_init.so 00:08:31.931 SYMLINK libspdk_virtio.so 00:08:31.931 LIB libspdk_fsdev.a 00:08:32.189 SO libspdk_fsdev.so.2.0 
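The SO/SYMLINK pairs above are the harness verifying that each freshly linked libspdk_*.so resolves to its expected versioned name (for example libspdk_fsdev.so.2.0 just before this point). A minimal sketch of the same check done by hand with binutils; the per-library major.minor numbers are whatever the SO lines print:

  readelf -d build/lib/libspdk_log.so | grep SONAME   # embedded soname, e.g. libspdk_log.so.7.1 in this run
  ls -l build/lib/libspdk_log.so*                     # the SYMLINK entries: libspdk_log.so -> the versioned file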
00:08:32.189 SYMLINK libspdk_fsdev.so 00:08:32.189 CC lib/event/app.o 00:08:32.189 CC lib/event/reactor.o 00:08:32.189 CC lib/event/scheduler_static.o 00:08:32.189 CC lib/event/log_rpc.o 00:08:32.189 CC lib/event/app_rpc.o 00:08:32.189 LIB libspdk_accel.a 00:08:32.448 SO libspdk_accel.so.16.0 00:08:32.448 SYMLINK libspdk_accel.so 00:08:32.448 LIB libspdk_nvme.a 00:08:32.448 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:08:32.448 LIB libspdk_event.a 00:08:32.448 SO libspdk_nvme.so.15.0 00:08:32.448 SO libspdk_event.so.14.0 00:08:32.707 SYMLINK libspdk_event.so 00:08:32.707 CC lib/bdev/bdev.o 00:08:32.707 CC lib/bdev/bdev_rpc.o 00:08:32.707 CC lib/bdev/bdev_zone.o 00:08:32.707 CC lib/bdev/part.o 00:08:32.707 CC lib/bdev/scsi_nvme.o 00:08:32.707 SYMLINK libspdk_nvme.so 00:08:32.965 LIB libspdk_fuse_dispatcher.a 00:08:32.965 SO libspdk_fuse_dispatcher.so.1.0 00:08:32.965 SYMLINK libspdk_fuse_dispatcher.so 00:08:33.530 LIB libspdk_blob.a 00:08:33.530 SO libspdk_blob.so.12.0 00:08:33.530 SYMLINK libspdk_blob.so 00:08:34.096 CC lib/lvol/lvol.o 00:08:34.096 CC lib/blobfs/blobfs.o 00:08:34.096 CC lib/blobfs/tree.o 00:08:34.354 LIB libspdk_bdev.a 00:08:34.354 SO libspdk_bdev.so.17.0 00:08:34.354 LIB libspdk_blobfs.a 00:08:34.354 SYMLINK libspdk_bdev.so 00:08:34.613 SO libspdk_blobfs.so.11.0 00:08:34.613 LIB libspdk_lvol.a 00:08:34.613 SO libspdk_lvol.so.11.0 00:08:34.613 SYMLINK libspdk_blobfs.so 00:08:34.613 SYMLINK libspdk_lvol.so 00:08:34.871 CC lib/nvmf/ctrlr.o 00:08:34.871 CC lib/nvmf/ctrlr_discovery.o 00:08:34.871 CC lib/nvmf/ctrlr_bdev.o 00:08:34.871 CC lib/nvmf/subsystem.o 00:08:34.871 CC lib/nvmf/nvmf.o 00:08:34.871 CC lib/nvmf/nvmf_rpc.o 00:08:34.871 CC lib/nvmf/transport.o 00:08:34.871 CC lib/nbd/nbd.o 00:08:34.871 CC lib/nvmf/tcp.o 00:08:34.871 CC lib/nbd/nbd_rpc.o 00:08:34.871 CC lib/nvmf/stubs.o 00:08:34.871 CC lib/ublk/ublk.o 00:08:34.871 CC lib/nvmf/mdns_server.o 00:08:34.871 CC lib/scsi/dev.o 00:08:34.871 CC lib/scsi/scsi.o 00:08:34.871 CC lib/ublk/ublk_rpc.o 00:08:34.871 CC lib/nvmf/rdma.o 00:08:34.871 CC lib/scsi/lun.o 00:08:34.871 CC lib/nvmf/auth.o 00:08:34.871 CC lib/scsi/port.o 00:08:34.871 CC lib/ftl/ftl_core.o 00:08:34.871 CC lib/ftl/ftl_init.o 00:08:34.871 CC lib/scsi/scsi_bdev.o 00:08:34.871 CC lib/ftl/ftl_layout.o 00:08:34.871 CC lib/scsi/scsi_pr.o 00:08:34.871 CC lib/scsi/scsi_rpc.o 00:08:34.871 CC lib/ftl/ftl_debug.o 00:08:34.871 CC lib/scsi/task.o 00:08:34.871 CC lib/ftl/ftl_io.o 00:08:34.871 CC lib/ftl/ftl_sb.o 00:08:34.871 CC lib/ftl/ftl_l2p.o 00:08:34.871 CC lib/ftl/ftl_l2p_flat.o 00:08:34.871 CC lib/ftl/ftl_nv_cache.o 00:08:34.871 CC lib/ftl/ftl_band.o 00:08:34.871 CC lib/ftl/ftl_band_ops.o 00:08:34.871 CC lib/ftl/ftl_writer.o 00:08:34.871 CC lib/ftl/ftl_rq.o 00:08:34.871 CC lib/ftl/ftl_reloc.o 00:08:34.871 CC lib/ftl/ftl_l2p_cache.o 00:08:34.871 CC lib/ftl/ftl_p2l.o 00:08:34.871 CC lib/ftl/ftl_p2l_log.o 00:08:34.871 CC lib/ftl/mngt/ftl_mngt.o 00:08:34.871 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:08:34.871 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:08:34.871 CC lib/ftl/mngt/ftl_mngt_startup.o 00:08:34.871 CC lib/ftl/mngt/ftl_mngt_md.o 00:08:34.871 CC lib/ftl/mngt/ftl_mngt_misc.o 00:08:34.871 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:08:34.871 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:08:34.871 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:08:34.871 CC lib/ftl/mngt/ftl_mngt_band.o 00:08:34.871 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:08:34.871 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:08:34.871 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:08:34.871 CC lib/ftl/utils/ftl_md.o 00:08:34.871 CC lib/ftl/utils/ftl_conf.o 
00:08:34.871 CC lib/ftl/utils/ftl_bitmap.o 00:08:34.871 CC lib/ftl/utils/ftl_mempool.o 00:08:34.871 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:08:34.871 CC lib/ftl/utils/ftl_property.o 00:08:34.871 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:08:34.871 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:08:34.871 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:08:34.871 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:08:34.871 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:08:34.871 CC lib/ftl/upgrade/ftl_sb_v5.o 00:08:34.871 CC lib/ftl/upgrade/ftl_sb_v3.o 00:08:34.871 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:08:34.871 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:08:34.871 CC lib/ftl/nvc/ftl_nvc_dev.o 00:08:34.871 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:08:34.871 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:08:34.871 CC lib/ftl/base/ftl_base_bdev.o 00:08:34.871 CC lib/ftl/base/ftl_base_dev.o 00:08:34.871 CC lib/ftl/ftl_trace.o 00:08:35.435 LIB libspdk_nbd.a 00:08:35.435 SO libspdk_nbd.so.7.0 00:08:35.435 LIB libspdk_ublk.a 00:08:35.435 SYMLINK libspdk_nbd.so 00:08:35.435 SO libspdk_ublk.so.3.0 00:08:35.435 LIB libspdk_scsi.a 00:08:35.435 SYMLINK libspdk_ublk.so 00:08:35.435 SO libspdk_scsi.so.9.0 00:08:35.692 SYMLINK libspdk_scsi.so 00:08:35.692 LIB libspdk_ftl.a 00:08:35.692 SO libspdk_ftl.so.9.0 00:08:35.949 CC lib/iscsi/conn.o 00:08:35.949 CC lib/iscsi/init_grp.o 00:08:35.949 CC lib/iscsi/iscsi.o 00:08:35.949 CC lib/iscsi/param.o 00:08:35.949 CC lib/iscsi/portal_grp.o 00:08:35.949 CC lib/vhost/vhost.o 00:08:35.949 CC lib/vhost/vhost_rpc.o 00:08:35.949 CC lib/iscsi/tgt_node.o 00:08:35.949 CC lib/iscsi/iscsi_subsystem.o 00:08:35.949 CC lib/iscsi/iscsi_rpc.o 00:08:35.949 CC lib/vhost/vhost_blk.o 00:08:35.949 CC lib/vhost/vhost_scsi.o 00:08:35.949 CC lib/iscsi/task.o 00:08:35.949 CC lib/vhost/rte_vhost_user.o 00:08:35.949 SYMLINK libspdk_ftl.so 00:08:36.516 LIB libspdk_nvmf.a 00:08:36.516 SO libspdk_nvmf.so.20.0 00:08:36.516 LIB libspdk_vhost.a 00:08:36.775 SYMLINK libspdk_nvmf.so 00:08:36.775 SO libspdk_vhost.so.8.0 00:08:36.775 SYMLINK libspdk_vhost.so 00:08:36.775 LIB libspdk_iscsi.a 00:08:36.775 SO libspdk_iscsi.so.8.0 00:08:37.034 SYMLINK libspdk_iscsi.so 00:08:37.601 CC module/env_dpdk/env_dpdk_rpc.o 00:08:37.601 LIB libspdk_env_dpdk_rpc.a 00:08:37.601 CC module/scheduler/dynamic/scheduler_dynamic.o 00:08:37.601 CC module/keyring/file/keyring.o 00:08:37.601 CC module/keyring/file/keyring_rpc.o 00:08:37.601 CC module/accel/iaa/accel_iaa.o 00:08:37.601 CC module/accel/iaa/accel_iaa_rpc.o 00:08:37.601 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:08:37.601 CC module/accel/error/accel_error.o 00:08:37.601 CC module/accel/error/accel_error_rpc.o 00:08:37.601 CC module/fsdev/aio/fsdev_aio.o 00:08:37.601 CC module/keyring/linux/keyring.o 00:08:37.601 CC module/scheduler/gscheduler/gscheduler.o 00:08:37.601 CC module/keyring/linux/keyring_rpc.o 00:08:37.601 CC module/blob/bdev/blob_bdev.o 00:08:37.601 CC module/fsdev/aio/linux_aio_mgr.o 00:08:37.601 CC module/fsdev/aio/fsdev_aio_rpc.o 00:08:37.601 CC module/accel/dsa/accel_dsa.o 00:08:37.601 CC module/accel/ioat/accel_ioat.o 00:08:37.601 CC module/sock/posix/posix.o 00:08:37.601 CC module/accel/ioat/accel_ioat_rpc.o 00:08:37.601 CC module/accel/dsa/accel_dsa_rpc.o 00:08:37.601 SO libspdk_env_dpdk_rpc.so.6.0 00:08:37.601 SYMLINK libspdk_env_dpdk_rpc.so 00:08:37.860 LIB libspdk_scheduler_gscheduler.a 00:08:37.860 LIB libspdk_scheduler_dpdk_governor.a 00:08:37.860 LIB libspdk_keyring_file.a 00:08:37.860 LIB libspdk_keyring_linux.a 00:08:37.860 SO libspdk_scheduler_gscheduler.so.4.0 
00:08:37.860 SO libspdk_keyring_file.so.2.0 00:08:37.860 SO libspdk_scheduler_dpdk_governor.so.4.0 00:08:37.860 LIB libspdk_accel_error.a 00:08:37.860 LIB libspdk_accel_iaa.a 00:08:37.860 LIB libspdk_scheduler_dynamic.a 00:08:37.860 LIB libspdk_accel_ioat.a 00:08:37.860 SO libspdk_keyring_linux.so.1.0 00:08:37.860 SYMLINK libspdk_keyring_file.so 00:08:37.860 SO libspdk_accel_error.so.2.0 00:08:37.860 SYMLINK libspdk_scheduler_gscheduler.so 00:08:37.860 SO libspdk_accel_iaa.so.3.0 00:08:37.860 SO libspdk_scheduler_dynamic.so.4.0 00:08:37.860 SO libspdk_accel_ioat.so.6.0 00:08:37.860 SYMLINK libspdk_scheduler_dpdk_governor.so 00:08:37.860 LIB libspdk_blob_bdev.a 00:08:37.860 SYMLINK libspdk_keyring_linux.so 00:08:37.860 LIB libspdk_accel_dsa.a 00:08:37.860 SYMLINK libspdk_accel_error.so 00:08:37.860 SYMLINK libspdk_accel_ioat.so 00:08:37.860 SYMLINK libspdk_accel_iaa.so 00:08:37.860 SYMLINK libspdk_scheduler_dynamic.so 00:08:37.860 SO libspdk_blob_bdev.so.12.0 00:08:37.860 SO libspdk_accel_dsa.so.5.0 00:08:37.860 SYMLINK libspdk_blob_bdev.so 00:08:37.860 SYMLINK libspdk_accel_dsa.so 00:08:38.119 LIB libspdk_fsdev_aio.a 00:08:38.119 SO libspdk_fsdev_aio.so.1.0 00:08:38.119 LIB libspdk_sock_posix.a 00:08:38.119 SO libspdk_sock_posix.so.6.0 00:08:38.119 SYMLINK libspdk_fsdev_aio.so 00:08:38.378 SYMLINK libspdk_sock_posix.so 00:08:38.378 CC module/bdev/gpt/gpt.o 00:08:38.378 CC module/bdev/delay/vbdev_delay.o 00:08:38.378 CC module/bdev/gpt/vbdev_gpt.o 00:08:38.378 CC module/bdev/null/bdev_null.o 00:08:38.378 CC module/bdev/null/bdev_null_rpc.o 00:08:38.378 CC module/blobfs/bdev/blobfs_bdev.o 00:08:38.378 CC module/bdev/delay/vbdev_delay_rpc.o 00:08:38.378 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:08:38.378 CC module/bdev/error/vbdev_error.o 00:08:38.378 CC module/bdev/malloc/bdev_malloc.o 00:08:38.378 CC module/bdev/nvme/bdev_nvme.o 00:08:38.378 CC module/bdev/error/vbdev_error_rpc.o 00:08:38.378 CC module/bdev/malloc/bdev_malloc_rpc.o 00:08:38.378 CC module/bdev/nvme/bdev_nvme_rpc.o 00:08:38.378 CC module/bdev/aio/bdev_aio.o 00:08:38.378 CC module/bdev/lvol/vbdev_lvol.o 00:08:38.378 CC module/bdev/nvme/nvme_rpc.o 00:08:38.378 CC module/bdev/aio/bdev_aio_rpc.o 00:08:38.378 CC module/bdev/raid/bdev_raid.o 00:08:38.378 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:08:38.378 CC module/bdev/raid/bdev_raid_rpc.o 00:08:38.378 CC module/bdev/nvme/bdev_mdns_client.o 00:08:38.378 CC module/bdev/split/vbdev_split.o 00:08:38.378 CC module/bdev/nvme/vbdev_opal.o 00:08:38.378 CC module/bdev/nvme/vbdev_opal_rpc.o 00:08:38.378 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:08:38.378 CC module/bdev/raid/bdev_raid_sb.o 00:08:38.378 CC module/bdev/raid/raid0.o 00:08:38.378 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:08:38.378 CC module/bdev/split/vbdev_split_rpc.o 00:08:38.378 CC module/bdev/raid/raid1.o 00:08:38.378 CC module/bdev/passthru/vbdev_passthru.o 00:08:38.378 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:08:38.378 CC module/bdev/iscsi/bdev_iscsi.o 00:08:38.378 CC module/bdev/raid/concat.o 00:08:38.378 CC module/bdev/virtio/bdev_virtio_scsi.o 00:08:38.378 CC module/bdev/virtio/bdev_virtio_rpc.o 00:08:38.378 CC module/bdev/virtio/bdev_virtio_blk.o 00:08:38.378 CC module/bdev/zone_block/vbdev_zone_block.o 00:08:38.378 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:08:38.378 CC module/bdev/ftl/bdev_ftl_rpc.o 00:08:38.378 CC module/bdev/ftl/bdev_ftl.o 00:08:38.636 LIB libspdk_bdev_null.a 00:08:38.636 LIB libspdk_blobfs_bdev.a 00:08:38.636 LIB libspdk_bdev_gpt.a 00:08:38.636 SO libspdk_bdev_null.so.6.0 
00:08:38.636 SO libspdk_blobfs_bdev.so.6.0 00:08:38.636 LIB libspdk_bdev_split.a 00:08:38.636 SO libspdk_bdev_gpt.so.6.0 00:08:38.894 LIB libspdk_bdev_ftl.a 00:08:38.894 SO libspdk_bdev_split.so.6.0 00:08:38.894 LIB libspdk_bdev_error.a 00:08:38.894 SYMLINK libspdk_blobfs_bdev.so 00:08:38.894 LIB libspdk_bdev_malloc.a 00:08:38.894 SYMLINK libspdk_bdev_null.so 00:08:38.895 LIB libspdk_bdev_aio.a 00:08:38.895 SYMLINK libspdk_bdev_gpt.so 00:08:38.895 LIB libspdk_bdev_delay.a 00:08:38.895 SO libspdk_bdev_ftl.so.6.0 00:08:38.895 LIB libspdk_bdev_passthru.a 00:08:38.895 SO libspdk_bdev_error.so.6.0 00:08:38.895 SO libspdk_bdev_malloc.so.6.0 00:08:38.895 SO libspdk_bdev_aio.so.6.0 00:08:38.895 LIB libspdk_bdev_zone_block.a 00:08:38.895 SO libspdk_bdev_delay.so.6.0 00:08:38.895 SYMLINK libspdk_bdev_split.so 00:08:38.895 SO libspdk_bdev_passthru.so.6.0 00:08:38.895 LIB libspdk_bdev_iscsi.a 00:08:38.895 SYMLINK libspdk_bdev_ftl.so 00:08:38.895 SO libspdk_bdev_zone_block.so.6.0 00:08:38.895 SYMLINK libspdk_bdev_error.so 00:08:38.895 SYMLINK libspdk_bdev_malloc.so 00:08:38.895 SYMLINK libspdk_bdev_aio.so 00:08:38.895 SO libspdk_bdev_iscsi.so.6.0 00:08:38.895 SYMLINK libspdk_bdev_delay.so 00:08:38.895 SYMLINK libspdk_bdev_passthru.so 00:08:38.895 LIB libspdk_bdev_virtio.a 00:08:38.895 SYMLINK libspdk_bdev_zone_block.so 00:08:38.895 LIB libspdk_bdev_lvol.a 00:08:38.895 SYMLINK libspdk_bdev_iscsi.so 00:08:38.895 SO libspdk_bdev_virtio.so.6.0 00:08:38.895 SO libspdk_bdev_lvol.so.6.0 00:08:39.153 SYMLINK libspdk_bdev_virtio.so 00:08:39.153 SYMLINK libspdk_bdev_lvol.so 00:08:39.153 LIB libspdk_bdev_raid.a 00:08:39.413 SO libspdk_bdev_raid.so.6.0 00:08:39.413 SYMLINK libspdk_bdev_raid.so 00:08:40.351 LIB libspdk_bdev_nvme.a 00:08:40.351 SO libspdk_bdev_nvme.so.7.1 00:08:40.351 SYMLINK libspdk_bdev_nvme.so 00:08:40.918 CC module/event/subsystems/vmd/vmd.o 00:08:40.918 CC module/event/subsystems/vmd/vmd_rpc.o 00:08:40.918 CC module/event/subsystems/iobuf/iobuf.o 00:08:40.918 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:08:40.918 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:08:40.918 CC module/event/subsystems/keyring/keyring.o 00:08:40.918 CC module/event/subsystems/scheduler/scheduler.o 00:08:40.918 CC module/event/subsystems/fsdev/fsdev.o 00:08:40.918 CC module/event/subsystems/sock/sock.o 00:08:41.177 LIB libspdk_event_iobuf.a 00:08:41.177 LIB libspdk_event_vhost_blk.a 00:08:41.177 LIB libspdk_event_sock.a 00:08:41.177 LIB libspdk_event_fsdev.a 00:08:41.177 LIB libspdk_event_keyring.a 00:08:41.177 LIB libspdk_event_scheduler.a 00:08:41.177 LIB libspdk_event_vmd.a 00:08:41.177 SO libspdk_event_fsdev.so.1.0 00:08:41.177 SO libspdk_event_vhost_blk.so.3.0 00:08:41.177 SO libspdk_event_sock.so.5.0 00:08:41.177 SO libspdk_event_iobuf.so.3.0 00:08:41.177 SO libspdk_event_keyring.so.1.0 00:08:41.177 SO libspdk_event_scheduler.so.4.0 00:08:41.177 SO libspdk_event_vmd.so.6.0 00:08:41.177 SYMLINK libspdk_event_vhost_blk.so 00:08:41.177 SYMLINK libspdk_event_fsdev.so 00:08:41.177 SYMLINK libspdk_event_sock.so 00:08:41.177 SYMLINK libspdk_event_keyring.so 00:08:41.177 SYMLINK libspdk_event_scheduler.so 00:08:41.177 SYMLINK libspdk_event_iobuf.so 00:08:41.177 SYMLINK libspdk_event_vmd.so 00:08:41.436 CC module/event/subsystems/accel/accel.o 00:08:41.695 LIB libspdk_event_accel.a 00:08:41.695 SO libspdk_event_accel.so.6.0 00:08:41.695 SYMLINK libspdk_event_accel.so 00:08:41.954 CC module/event/subsystems/bdev/bdev.o 00:08:42.213 LIB libspdk_event_bdev.a 00:08:42.213 SO libspdk_event_bdev.so.6.0 00:08:42.213 
SYMLINK libspdk_event_bdev.so 00:08:42.472 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:08:42.473 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:08:42.473 CC module/event/subsystems/nbd/nbd.o 00:08:42.473 CC module/event/subsystems/scsi/scsi.o 00:08:42.473 CC module/event/subsystems/ublk/ublk.o 00:08:42.731 LIB libspdk_event_nbd.a 00:08:42.731 LIB libspdk_event_ublk.a 00:08:42.731 LIB libspdk_event_scsi.a 00:08:42.731 SO libspdk_event_nbd.so.6.0 00:08:42.731 SO libspdk_event_ublk.so.3.0 00:08:42.731 SO libspdk_event_scsi.so.6.0 00:08:42.731 LIB libspdk_event_nvmf.a 00:08:42.731 SYMLINK libspdk_event_ublk.so 00:08:42.731 SYMLINK libspdk_event_nbd.so 00:08:42.731 SO libspdk_event_nvmf.so.6.0 00:08:42.731 SYMLINK libspdk_event_scsi.so 00:08:42.990 SYMLINK libspdk_event_nvmf.so 00:08:43.250 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:08:43.250 CC module/event/subsystems/iscsi/iscsi.o 00:08:43.250 LIB libspdk_event_vhost_scsi.a 00:08:43.250 LIB libspdk_event_iscsi.a 00:08:43.250 SO libspdk_event_vhost_scsi.so.3.0 00:08:43.250 SO libspdk_event_iscsi.so.6.0 00:08:43.509 SYMLINK libspdk_event_vhost_scsi.so 00:08:43.509 SYMLINK libspdk_event_iscsi.so 00:08:43.509 SO libspdk.so.6.0 00:08:43.509 SYMLINK libspdk.so 00:08:44.085 CXX app/trace/trace.o 00:08:44.085 CC app/spdk_nvme_perf/perf.o 00:08:44.085 CC app/spdk_lspci/spdk_lspci.o 00:08:44.085 TEST_HEADER include/spdk/accel.h 00:08:44.085 TEST_HEADER include/spdk/accel_module.h 00:08:44.085 CC app/spdk_nvme_identify/identify.o 00:08:44.085 TEST_HEADER include/spdk/barrier.h 00:08:44.085 TEST_HEADER include/spdk/assert.h 00:08:44.085 CC app/trace_record/trace_record.o 00:08:44.085 CC app/spdk_top/spdk_top.o 00:08:44.085 TEST_HEADER include/spdk/bdev.h 00:08:44.085 TEST_HEADER include/spdk/base64.h 00:08:44.085 TEST_HEADER include/spdk/bdev_module.h 00:08:44.085 CC app/spdk_nvme_discover/discovery_aer.o 00:08:44.085 TEST_HEADER include/spdk/bdev_zone.h 00:08:44.085 CC test/rpc_client/rpc_client_test.o 00:08:44.085 TEST_HEADER include/spdk/bit_pool.h 00:08:44.085 TEST_HEADER include/spdk/bit_array.h 00:08:44.085 TEST_HEADER include/spdk/blob_bdev.h 00:08:44.085 TEST_HEADER include/spdk/blobfs_bdev.h 00:08:44.085 TEST_HEADER include/spdk/blobfs.h 00:08:44.085 TEST_HEADER include/spdk/blob.h 00:08:44.085 TEST_HEADER include/spdk/conf.h 00:08:44.085 TEST_HEADER include/spdk/config.h 00:08:44.085 TEST_HEADER include/spdk/cpuset.h 00:08:44.085 TEST_HEADER include/spdk/crc16.h 00:08:44.085 TEST_HEADER include/spdk/crc32.h 00:08:44.085 TEST_HEADER include/spdk/dif.h 00:08:44.085 TEST_HEADER include/spdk/crc64.h 00:08:44.085 TEST_HEADER include/spdk/endian.h 00:08:44.085 TEST_HEADER include/spdk/dma.h 00:08:44.085 TEST_HEADER include/spdk/env_dpdk.h 00:08:44.085 TEST_HEADER include/spdk/event.h 00:08:44.085 TEST_HEADER include/spdk/env.h 00:08:44.085 TEST_HEADER include/spdk/fd_group.h 00:08:44.085 TEST_HEADER include/spdk/file.h 00:08:44.085 CC app/spdk_dd/spdk_dd.o 00:08:44.085 TEST_HEADER include/spdk/fsdev.h 00:08:44.085 TEST_HEADER include/spdk/fd.h 00:08:44.085 TEST_HEADER include/spdk/ftl.h 00:08:44.085 TEST_HEADER include/spdk/fsdev_module.h 00:08:44.085 TEST_HEADER include/spdk/fuse_dispatcher.h 00:08:44.085 TEST_HEADER include/spdk/gpt_spec.h 00:08:44.085 TEST_HEADER include/spdk/hexlify.h 00:08:44.085 CC examples/interrupt_tgt/interrupt_tgt.o 00:08:44.085 TEST_HEADER include/spdk/idxd.h 00:08:44.085 TEST_HEADER include/spdk/idxd_spec.h 00:08:44.085 TEST_HEADER include/spdk/histogram_data.h 00:08:44.085 TEST_HEADER include/spdk/init.h 
00:08:44.085 TEST_HEADER include/spdk/ioat_spec.h 00:08:44.085 CC app/iscsi_tgt/iscsi_tgt.o 00:08:44.085 TEST_HEADER include/spdk/ioat.h 00:08:44.085 TEST_HEADER include/spdk/iscsi_spec.h 00:08:44.085 TEST_HEADER include/spdk/json.h 00:08:44.085 TEST_HEADER include/spdk/jsonrpc.h 00:08:44.085 CC app/nvmf_tgt/nvmf_main.o 00:08:44.085 TEST_HEADER include/spdk/keyring_module.h 00:08:44.085 TEST_HEADER include/spdk/keyring.h 00:08:44.085 TEST_HEADER include/spdk/log.h 00:08:44.085 TEST_HEADER include/spdk/likely.h 00:08:44.085 TEST_HEADER include/spdk/lvol.h 00:08:44.085 TEST_HEADER include/spdk/memory.h 00:08:44.085 TEST_HEADER include/spdk/md5.h 00:08:44.085 TEST_HEADER include/spdk/nbd.h 00:08:44.085 TEST_HEADER include/spdk/net.h 00:08:44.085 TEST_HEADER include/spdk/mmio.h 00:08:44.085 TEST_HEADER include/spdk/nvme_intel.h 00:08:44.085 TEST_HEADER include/spdk/notify.h 00:08:44.085 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:08:44.085 TEST_HEADER include/spdk/nvme.h 00:08:44.085 TEST_HEADER include/spdk/nvme_ocssd.h 00:08:44.085 TEST_HEADER include/spdk/nvme_spec.h 00:08:44.085 TEST_HEADER include/spdk/nvme_zns.h 00:08:44.085 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:08:44.085 TEST_HEADER include/spdk/nvmf.h 00:08:44.085 TEST_HEADER include/spdk/nvmf_cmd.h 00:08:44.085 TEST_HEADER include/spdk/nvmf_spec.h 00:08:44.085 TEST_HEADER include/spdk/nvmf_transport.h 00:08:44.085 CC app/spdk_tgt/spdk_tgt.o 00:08:44.085 TEST_HEADER include/spdk/opal_spec.h 00:08:44.085 TEST_HEADER include/spdk/pci_ids.h 00:08:44.085 TEST_HEADER include/spdk/opal.h 00:08:44.085 TEST_HEADER include/spdk/queue.h 00:08:44.085 TEST_HEADER include/spdk/pipe.h 00:08:44.085 TEST_HEADER include/spdk/reduce.h 00:08:44.085 TEST_HEADER include/spdk/rpc.h 00:08:44.085 TEST_HEADER include/spdk/scsi.h 00:08:44.085 TEST_HEADER include/spdk/scsi_spec.h 00:08:44.085 TEST_HEADER include/spdk/scheduler.h 00:08:44.085 TEST_HEADER include/spdk/sock.h 00:08:44.085 TEST_HEADER include/spdk/stdinc.h 00:08:44.085 TEST_HEADER include/spdk/string.h 00:08:44.085 TEST_HEADER include/spdk/trace_parser.h 00:08:44.085 TEST_HEADER include/spdk/trace.h 00:08:44.085 TEST_HEADER include/spdk/tree.h 00:08:44.086 TEST_HEADER include/spdk/thread.h 00:08:44.086 TEST_HEADER include/spdk/ublk.h 00:08:44.086 TEST_HEADER include/spdk/util.h 00:08:44.086 TEST_HEADER include/spdk/uuid.h 00:08:44.086 TEST_HEADER include/spdk/vfio_user_pci.h 00:08:44.086 TEST_HEADER include/spdk/version.h 00:08:44.086 TEST_HEADER include/spdk/vfio_user_spec.h 00:08:44.086 TEST_HEADER include/spdk/vhost.h 00:08:44.086 TEST_HEADER include/spdk/vmd.h 00:08:44.086 TEST_HEADER include/spdk/xor.h 00:08:44.086 TEST_HEADER include/spdk/zipf.h 00:08:44.086 CXX test/cpp_headers/accel.o 00:08:44.086 CXX test/cpp_headers/accel_module.o 00:08:44.086 CXX test/cpp_headers/assert.o 00:08:44.086 CXX test/cpp_headers/barrier.o 00:08:44.086 CXX test/cpp_headers/base64.o 00:08:44.086 CXX test/cpp_headers/bdev.o 00:08:44.086 CXX test/cpp_headers/bdev_module.o 00:08:44.086 CXX test/cpp_headers/bdev_zone.o 00:08:44.086 CXX test/cpp_headers/bit_array.o 00:08:44.086 CXX test/cpp_headers/blobfs_bdev.o 00:08:44.086 CXX test/cpp_headers/bit_pool.o 00:08:44.086 CXX test/cpp_headers/blob_bdev.o 00:08:44.086 CXX test/cpp_headers/blob.o 00:08:44.086 CXX test/cpp_headers/blobfs.o 00:08:44.086 CXX test/cpp_headers/crc16.o 00:08:44.086 CXX test/cpp_headers/conf.o 00:08:44.086 CXX test/cpp_headers/crc32.o 00:08:44.086 CXX test/cpp_headers/config.o 00:08:44.086 CXX test/cpp_headers/cpuset.o 00:08:44.086 CXX 
test/cpp_headers/crc64.o 00:08:44.086 CXX test/cpp_headers/dif.o 00:08:44.086 CXX test/cpp_headers/env_dpdk.o 00:08:44.086 CXX test/cpp_headers/endian.o 00:08:44.086 CXX test/cpp_headers/env.o 00:08:44.086 CXX test/cpp_headers/dma.o 00:08:44.086 CXX test/cpp_headers/fd_group.o 00:08:44.086 CXX test/cpp_headers/event.o 00:08:44.086 CXX test/cpp_headers/file.o 00:08:44.086 CXX test/cpp_headers/fd.o 00:08:44.086 CXX test/cpp_headers/fsdev_module.o 00:08:44.086 CXX test/cpp_headers/fsdev.o 00:08:44.086 CXX test/cpp_headers/ftl.o 00:08:44.086 CXX test/cpp_headers/fuse_dispatcher.o 00:08:44.086 CXX test/cpp_headers/gpt_spec.o 00:08:44.086 CXX test/cpp_headers/hexlify.o 00:08:44.086 CXX test/cpp_headers/idxd_spec.o 00:08:44.086 CXX test/cpp_headers/idxd.o 00:08:44.086 CXX test/cpp_headers/histogram_data.o 00:08:44.086 CXX test/cpp_headers/init.o 00:08:44.086 CXX test/cpp_headers/ioat_spec.o 00:08:44.086 CXX test/cpp_headers/ioat.o 00:08:44.086 CXX test/cpp_headers/json.o 00:08:44.086 CXX test/cpp_headers/jsonrpc.o 00:08:44.086 CXX test/cpp_headers/iscsi_spec.o 00:08:44.086 CXX test/cpp_headers/keyring.o 00:08:44.086 CXX test/cpp_headers/keyring_module.o 00:08:44.086 CXX test/cpp_headers/likely.o 00:08:44.086 CXX test/cpp_headers/lvol.o 00:08:44.086 CXX test/cpp_headers/log.o 00:08:44.086 CXX test/cpp_headers/memory.o 00:08:44.086 CXX test/cpp_headers/md5.o 00:08:44.086 CXX test/cpp_headers/mmio.o 00:08:44.086 CXX test/cpp_headers/nbd.o 00:08:44.086 CXX test/cpp_headers/notify.o 00:08:44.086 CXX test/cpp_headers/net.o 00:08:44.086 CXX test/cpp_headers/nvme_ocssd.o 00:08:44.086 CXX test/cpp_headers/nvme_intel.o 00:08:44.086 CXX test/cpp_headers/nvme.o 00:08:44.086 CXX test/cpp_headers/nvme_spec.o 00:08:44.086 CXX test/cpp_headers/nvmf_cmd.o 00:08:44.086 CXX test/cpp_headers/nvme_zns.o 00:08:44.086 CXX test/cpp_headers/nvme_ocssd_spec.o 00:08:44.086 CXX test/cpp_headers/nvmf_fc_spec.o 00:08:44.086 CXX test/cpp_headers/nvmf_spec.o 00:08:44.086 CXX test/cpp_headers/nvmf.o 00:08:44.086 CXX test/cpp_headers/nvmf_transport.o 00:08:44.086 CXX test/cpp_headers/opal.o 00:08:44.086 CXX test/cpp_headers/opal_spec.o 00:08:44.086 CXX test/cpp_headers/pci_ids.o 00:08:44.086 CXX test/cpp_headers/pipe.o 00:08:44.086 CXX test/cpp_headers/reduce.o 00:08:44.086 CXX test/cpp_headers/queue.o 00:08:44.086 CXX test/cpp_headers/rpc.o 00:08:44.086 CXX test/cpp_headers/scheduler.o 00:08:44.086 CXX test/cpp_headers/scsi.o 00:08:44.086 CXX test/cpp_headers/scsi_spec.o 00:08:44.086 CXX test/cpp_headers/sock.o 00:08:44.086 CXX test/cpp_headers/stdinc.o 00:08:44.086 CXX test/cpp_headers/string.o 00:08:44.086 CXX test/cpp_headers/thread.o 00:08:44.086 CC examples/ioat/verify/verify.o 00:08:44.086 CXX test/cpp_headers/trace.o 00:08:44.086 CXX test/cpp_headers/trace_parser.o 00:08:44.086 CXX test/cpp_headers/tree.o 00:08:44.086 CC examples/util/zipf/zipf.o 00:08:44.086 CC test/env/vtophys/vtophys.o 00:08:44.086 CC test/app/stub/stub.o 00:08:44.086 CC examples/ioat/perf/perf.o 00:08:44.086 CC test/env/pci/pci_ut.o 00:08:44.086 CXX test/cpp_headers/ublk.o 00:08:44.086 CC test/app/jsoncat/jsoncat.o 00:08:44.086 CC test/thread/poller_perf/poller_perf.o 00:08:44.086 CC test/env/memory/memory_ut.o 00:08:44.364 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:08:44.364 CC test/app/histogram_perf/histogram_perf.o 00:08:44.364 CC app/fio/nvme/fio_plugin.o 00:08:44.364 CXX test/cpp_headers/util.o 00:08:44.364 LINK spdk_lspci 00:08:44.364 CC app/fio/bdev/fio_plugin.o 00:08:44.364 CC test/app/bdev_svc/bdev_svc.o 00:08:44.364 CC 
test/dma/test_dma/test_dma.o 00:08:44.364 LINK nvmf_tgt 00:08:44.635 LINK rpc_client_test 00:08:44.635 CC test/env/mem_callbacks/mem_callbacks.o 00:08:44.635 LINK interrupt_tgt 00:08:44.635 LINK vtophys 00:08:44.635 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:08:44.635 LINK zipf 00:08:44.635 LINK spdk_nvme_discover 00:08:44.635 LINK spdk_tgt 00:08:44.635 CXX test/cpp_headers/uuid.o 00:08:44.635 CXX test/cpp_headers/version.o 00:08:44.635 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:08:44.635 CXX test/cpp_headers/vfio_user_pci.o 00:08:44.635 LINK poller_perf 00:08:44.635 CXX test/cpp_headers/vfio_user_spec.o 00:08:44.635 CXX test/cpp_headers/vhost.o 00:08:44.635 LINK histogram_perf 00:08:44.894 CXX test/cpp_headers/vmd.o 00:08:44.894 CXX test/cpp_headers/xor.o 00:08:44.894 CXX test/cpp_headers/zipf.o 00:08:44.894 LINK iscsi_tgt 00:08:44.894 LINK spdk_trace_record 00:08:44.894 LINK jsoncat 00:08:44.894 LINK ioat_perf 00:08:44.894 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:08:44.894 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:08:44.894 LINK env_dpdk_post_init 00:08:44.894 LINK verify 00:08:44.894 LINK spdk_trace 00:08:44.894 LINK stub 00:08:44.894 LINK bdev_svc 00:08:44.894 LINK spdk_dd 00:08:45.153 LINK pci_ut 00:08:45.153 LINK test_dma 00:08:45.153 LINK spdk_bdev 00:08:45.153 LINK spdk_nvme 00:08:45.153 LINK spdk_nvme_perf 00:08:45.153 LINK vhost_fuzz 00:08:45.153 LINK nvme_fuzz 00:08:45.153 CC examples/sock/hello_world/hello_sock.o 00:08:45.153 CC test/event/event_perf/event_perf.o 00:08:45.153 LINK spdk_top 00:08:45.153 CC test/event/reactor/reactor.o 00:08:45.153 CC test/event/reactor_perf/reactor_perf.o 00:08:45.153 CC app/vhost/vhost.o 00:08:45.153 CC examples/idxd/perf/perf.o 00:08:45.153 CC examples/vmd/led/led.o 00:08:45.153 CC test/event/app_repeat/app_repeat.o 00:08:45.153 CC examples/vmd/lsvmd/lsvmd.o 00:08:45.411 LINK mem_callbacks 00:08:45.411 CC test/event/scheduler/scheduler.o 00:08:45.411 CC examples/thread/thread/thread_ex.o 00:08:45.411 LINK spdk_nvme_identify 00:08:45.411 LINK reactor 00:08:45.411 LINK event_perf 00:08:45.411 LINK reactor_perf 00:08:45.411 LINK lsvmd 00:08:45.411 LINK vhost 00:08:45.411 LINK led 00:08:45.411 LINK hello_sock 00:08:45.411 LINK app_repeat 00:08:45.411 LINK thread 00:08:45.411 LINK scheduler 00:08:45.668 LINK idxd_perf 00:08:45.668 CC test/nvme/overhead/overhead.o 00:08:45.668 CC test/nvme/connect_stress/connect_stress.o 00:08:45.668 CC test/nvme/reset/reset.o 00:08:45.668 CC test/nvme/simple_copy/simple_copy.o 00:08:45.668 CC test/nvme/boot_partition/boot_partition.o 00:08:45.668 CC test/nvme/cuse/cuse.o 00:08:45.668 CC test/nvme/fdp/fdp.o 00:08:45.668 CC test/nvme/aer/aer.o 00:08:45.668 CC test/nvme/compliance/nvme_compliance.o 00:08:45.668 CC test/nvme/fused_ordering/fused_ordering.o 00:08:45.668 CC test/nvme/e2edp/nvme_dp.o 00:08:45.668 CC test/nvme/sgl/sgl.o 00:08:45.668 CC test/nvme/doorbell_aers/doorbell_aers.o 00:08:45.668 CC test/nvme/reserve/reserve.o 00:08:45.668 CC test/nvme/startup/startup.o 00:08:45.668 CC test/nvme/err_injection/err_injection.o 00:08:45.668 CC test/accel/dif/dif.o 00:08:45.668 CC test/blobfs/mkfs/mkfs.o 00:08:45.668 LINK memory_ut 00:08:45.668 CC test/lvol/esnap/esnap.o 00:08:45.668 LINK boot_partition 00:08:45.668 LINK connect_stress 00:08:45.668 LINK reserve 00:08:45.668 LINK err_injection 00:08:45.668 LINK startup 00:08:45.668 LINK doorbell_aers 00:08:45.668 LINK fused_ordering 00:08:45.668 LINK simple_copy 00:08:45.927 LINK sgl 00:08:45.927 LINK mkfs 00:08:45.927 LINK reset 00:08:45.927 LINK nvme_dp 
00:08:45.927 LINK overhead 00:08:45.927 LINK aer 00:08:45.927 LINK nvme_compliance 00:08:45.927 CC examples/nvme/hotplug/hotplug.o 00:08:45.927 CC examples/nvme/hello_world/hello_world.o 00:08:45.927 LINK fdp 00:08:45.927 CC examples/nvme/nvme_manage/nvme_manage.o 00:08:45.927 CC examples/nvme/reconnect/reconnect.o 00:08:45.927 CC examples/nvme/cmb_copy/cmb_copy.o 00:08:45.927 CC examples/nvme/arbitration/arbitration.o 00:08:45.927 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:08:45.927 CC examples/nvme/abort/abort.o 00:08:45.927 CC examples/accel/perf/accel_perf.o 00:08:45.927 CC examples/fsdev/hello_world/hello_fsdev.o 00:08:45.927 CC examples/blob/hello_world/hello_blob.o 00:08:45.927 CC examples/blob/cli/blobcli.o 00:08:45.927 LINK iscsi_fuzz 00:08:45.927 LINK hello_world 00:08:45.927 LINK pmr_persistence 00:08:45.927 LINK cmb_copy 00:08:46.185 LINK hotplug 00:08:46.185 LINK dif 00:08:46.185 LINK reconnect 00:08:46.185 LINK arbitration 00:08:46.185 LINK abort 00:08:46.185 LINK nvme_manage 00:08:46.185 LINK hello_blob 00:08:46.185 LINK hello_fsdev 00:08:46.443 LINK accel_perf 00:08:46.443 LINK blobcli 00:08:46.443 LINK cuse 00:08:46.701 CC test/bdev/bdevio/bdevio.o 00:08:46.701 CC examples/bdev/hello_world/hello_bdev.o 00:08:46.701 CC examples/bdev/bdevperf/bdevperf.o 00:08:46.960 LINK bdevio 00:08:46.960 LINK hello_bdev 00:08:47.586 LINK bdevperf 00:08:47.844 CC examples/nvmf/nvmf/nvmf.o 00:08:48.103 LINK nvmf 00:08:49.039 LINK esnap 00:08:49.298 00:08:49.298 real 0m50.303s 00:08:49.298 user 5m44.349s 00:08:49.298 sys 2m41.983s 00:08:49.298 13:39:48 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:08:49.298 13:39:48 make -- common/autotest_common.sh@10 -- $ set +x 00:08:49.298 ************************************ 00:08:49.298 END TEST make 00:08:49.298 ************************************ 00:08:49.298 13:39:48 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:08:49.298 13:39:48 -- pm/common@29 -- $ signal_monitor_resources TERM 00:08:49.298 13:39:48 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:08:49.298 13:39:48 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:08:49.298 13:39:48 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:08:49.298 13:39:48 -- pm/common@44 -- $ pid=1447901 00:08:49.298 13:39:48 -- pm/common@50 -- $ kill -TERM 1447901 00:08:49.298 13:39:48 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:08:49.298 13:39:48 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:08:49.298 13:39:48 -- pm/common@44 -- $ pid=1447902 00:08:49.298 13:39:48 -- pm/common@50 -- $ kill -TERM 1447902 00:08:49.298 13:39:48 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:08:49.298 13:39:48 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:08:49.298 13:39:48 -- pm/common@44 -- $ pid=1447904 00:08:49.298 13:39:48 -- pm/common@50 -- $ kill -TERM 1447904 00:08:49.298 13:39:48 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:08:49.298 13:39:48 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:08:49.298 13:39:48 -- pm/common@44 -- $ pid=1447930 00:08:49.298 13:39:48 -- pm/common@50 -- $ sudo -E kill -TERM 1447930 00:08:49.298 13:39:48 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:08:49.298 13:39:48 -- 
spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf 00:08:49.298 13:39:49 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:49.298 13:39:49 -- common/autotest_common.sh@1711 -- # lcov --version 00:08:49.298 13:39:49 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:49.298 13:39:49 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:49.298 13:39:49 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:49.298 13:39:49 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:49.298 13:39:49 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:49.298 13:39:49 -- scripts/common.sh@336 -- # IFS=.-: 00:08:49.298 13:39:49 -- scripts/common.sh@336 -- # read -ra ver1 00:08:49.298 13:39:49 -- scripts/common.sh@337 -- # IFS=.-: 00:08:49.298 13:39:49 -- scripts/common.sh@337 -- # read -ra ver2 00:08:49.298 13:39:49 -- scripts/common.sh@338 -- # local 'op=<' 00:08:49.298 13:39:49 -- scripts/common.sh@340 -- # ver1_l=2 00:08:49.298 13:39:49 -- scripts/common.sh@341 -- # ver2_l=1 00:08:49.298 13:39:49 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:49.298 13:39:49 -- scripts/common.sh@344 -- # case "$op" in 00:08:49.298 13:39:49 -- scripts/common.sh@345 -- # : 1 00:08:49.298 13:39:49 -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:49.298 13:39:49 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:49.298 13:39:49 -- scripts/common.sh@365 -- # decimal 1 00:08:49.558 13:39:49 -- scripts/common.sh@353 -- # local d=1 00:08:49.558 13:39:49 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:49.558 13:39:49 -- scripts/common.sh@355 -- # echo 1 00:08:49.558 13:39:49 -- scripts/common.sh@365 -- # ver1[v]=1 00:08:49.558 13:39:49 -- scripts/common.sh@366 -- # decimal 2 00:08:49.558 13:39:49 -- scripts/common.sh@353 -- # local d=2 00:08:49.558 13:39:49 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:49.558 13:39:49 -- scripts/common.sh@355 -- # echo 2 00:08:49.558 13:39:49 -- scripts/common.sh@366 -- # ver2[v]=2 00:08:49.558 13:39:49 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:49.558 13:39:49 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:49.558 13:39:49 -- scripts/common.sh@368 -- # return 0 00:08:49.558 13:39:49 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:49.558 13:39:49 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:49.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:49.558 --rc genhtml_branch_coverage=1 00:08:49.558 --rc genhtml_function_coverage=1 00:08:49.558 --rc genhtml_legend=1 00:08:49.558 --rc geninfo_all_blocks=1 00:08:49.558 --rc geninfo_unexecuted_blocks=1 00:08:49.558 00:08:49.558 ' 00:08:49.558 13:39:49 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:49.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:49.558 --rc genhtml_branch_coverage=1 00:08:49.558 --rc genhtml_function_coverage=1 00:08:49.558 --rc genhtml_legend=1 00:08:49.558 --rc geninfo_all_blocks=1 00:08:49.558 --rc geninfo_unexecuted_blocks=1 00:08:49.558 00:08:49.558 ' 00:08:49.558 13:39:49 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:49.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:49.558 --rc genhtml_branch_coverage=1 00:08:49.558 --rc genhtml_function_coverage=1 00:08:49.558 --rc genhtml_legend=1 00:08:49.558 --rc geninfo_all_blocks=1 00:08:49.558 --rc 
geninfo_unexecuted_blocks=1 00:08:49.558 00:08:49.558 ' 00:08:49.558 13:39:49 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:49.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:49.558 --rc genhtml_branch_coverage=1 00:08:49.558 --rc genhtml_function_coverage=1 00:08:49.558 --rc genhtml_legend=1 00:08:49.558 --rc geninfo_all_blocks=1 00:08:49.558 --rc geninfo_unexecuted_blocks=1 00:08:49.558 00:08:49.558 ' 00:08:49.558 13:39:49 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:49.558 13:39:49 -- nvmf/common.sh@7 -- # uname -s 00:08:49.558 13:39:49 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:49.558 13:39:49 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:49.558 13:39:49 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:49.558 13:39:49 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:49.558 13:39:49 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:49.558 13:39:49 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:49.558 13:39:49 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:49.558 13:39:49 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:49.558 13:39:49 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:49.558 13:39:49 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:49.558 13:39:49 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:08:49.558 13:39:49 -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:08:49.558 13:39:49 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:49.558 13:39:49 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:49.558 13:39:49 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:49.558 13:39:49 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:49.558 13:39:49 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:49.558 13:39:49 -- scripts/common.sh@15 -- # shopt -s extglob 00:08:49.558 13:39:49 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:49.558 13:39:49 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:49.558 13:39:49 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:49.558 13:39:49 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.558 13:39:49 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.558 13:39:49 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.558 13:39:49 -- paths/export.sh@5 -- # export PATH 00:08:49.558 13:39:49 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.558 13:39:49 -- nvmf/common.sh@51 -- # : 0 00:08:49.558 13:39:49 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:49.558 13:39:49 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:49.558 13:39:49 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:49.558 13:39:49 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:49.558 13:39:49 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:49.558 13:39:49 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:49.558 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:49.558 13:39:49 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:49.558 13:39:49 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:49.558 13:39:49 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:49.558 13:39:49 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:08:49.558 13:39:49 -- spdk/autotest.sh@32 -- # uname -s 00:08:49.558 13:39:49 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:08:49.558 13:39:49 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:08:49.558 13:39:49 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/coredumps 00:08:49.558 13:39:49 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:08:49.558 13:39:49 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/coredumps 00:08:49.558 13:39:49 -- spdk/autotest.sh@44 -- # modprobe nbd 00:08:49.558 13:39:49 -- spdk/autotest.sh@46 -- # type -P udevadm 00:08:49.558 13:39:49 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:08:49.558 13:39:49 -- spdk/autotest.sh@48 -- # udevadm_pid=1526867 00:08:49.558 13:39:49 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:08:49.558 13:39:49 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:08:49.558 13:39:49 -- pm/common@17 -- # local monitor 00:08:49.558 13:39:49 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:08:49.558 13:39:49 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:08:49.558 13:39:49 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:08:49.558 13:39:49 -- pm/common@21 -- # date +%s 00:08:49.558 13:39:49 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:08:49.558 13:39:49 -- pm/common@21 -- # date +%s 00:08:49.558 13:39:49 -- pm/common@25 -- # sleep 1 00:08:49.558 13:39:49 -- pm/common@21 -- # date +%s 00:08:49.558 13:39:49 -- pm/common@21 -- # date +%s 00:08:49.558 13:39:49 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733402389 00:08:49.558 13:39:49 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733402389 00:08:49.558 13:39:49 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733402389 00:08:49.559 
13:39:49 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733402389 00:08:49.559 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733402389_collect-cpu-load.pm.log 00:08:49.559 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733402389_collect-vmstat.pm.log 00:08:49.559 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733402389_collect-cpu-temp.pm.log 00:08:49.559 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733402389_collect-bmc-pm.bmc.pm.log 00:08:50.496 13:39:50 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:08:50.496 13:39:50 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:08:50.496 13:39:50 -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:50.496 13:39:50 -- common/autotest_common.sh@10 -- # set +x 00:08:50.496 13:39:50 -- spdk/autotest.sh@59 -- # create_test_list 00:08:50.496 13:39:50 -- common/autotest_common.sh@752 -- # xtrace_disable 00:08:50.496 13:39:50 -- common/autotest_common.sh@10 -- # set +x 00:08:50.496 13:39:50 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/autotest.sh 00:08:50.496 13:39:50 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:08:50.496 13:39:50 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:08:50.496 13:39:50 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output 00:08:50.496 13:39:50 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:08:50.496 13:39:50 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:08:50.496 13:39:50 -- common/autotest_common.sh@1457 -- # uname 00:08:50.496 13:39:50 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:08:50.496 13:39:50 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:08:50.496 13:39:50 -- common/autotest_common.sh@1477 -- # uname 00:08:50.496 13:39:50 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:08:50.496 13:39:50 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:08:50.496 13:39:50 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:08:50.754 lcov: LCOV version 1.15 00:08:50.754 13:39:50 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_base.info 00:09:02.959 /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:09:02.959 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:09:13.024 13:40:12 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:09:13.024 13:40:12 -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:13.024 13:40:12 -- common/autotest_common.sh@10 -- # set +x 
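The lcov invocation traced above (`-q -c --no-external -i -t Baseline`) is a baseline capture: the -i flag records zero execution counts for every instrumented source file, so code never touched by a test still shows up in the final coverage report. A minimal shell sketch of the full flow, under the same workspace layout as this log; the post-run capture and merge commands are assumed from the usual lcov workflow and do not appear in this excerpt:

    src=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    out=$src/../output
    LCOV_OPTS="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1"

    # Baseline (-i): zero-count data for every instrumented file (seen above).
    lcov $LCOV_OPTS -q -c --no-external -i -t Baseline -d "$src" -o "$out/cov_base.info"

    # Assumed follow-up after the tests have run: capture the real counts and
    # merge them with the baseline so untested files keep their 0% entries.
    lcov $LCOV_OPTS -q -c --no-external -t Tests -d "$src" -o "$out/cov_test.info"
    lcov $LCOV_OPTS -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"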
00:09:13.024 13:40:12 -- spdk/autotest.sh@78 -- # rm -f 00:09:13.024 13:40:12 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:09:15.561 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:09:15.818 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:09:15.818 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:09:15.818 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:09:15.818 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:09:15.818 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:09:15.819 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:09:15.819 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:09:15.819 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:09:15.819 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:09:15.819 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:09:15.819 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:09:15.819 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:09:16.076 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:09:16.076 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:09:16.076 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:09:16.076 0000:d8:00.0 (8086 0a54): Already using the nvme driver 00:09:17.451 13:40:17 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:09:17.451 13:40:17 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:09:17.451 13:40:17 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:09:17.451 13:40:17 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:09:17.451 13:40:17 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:09:17.451 13:40:17 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:09:17.451 13:40:17 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:09:17.451 13:40:17 -- common/autotest_common.sh@1669 -- # bdf=0000:d8:00.0 00:09:17.451 13:40:17 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:09:17.451 13:40:17 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:09:17.451 13:40:17 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:09:17.451 13:40:17 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:09:17.451 13:40:17 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:09:17.451 13:40:17 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:09:17.451 13:40:17 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:09:17.451 13:40:17 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:09:17.451 13:40:17 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:09:17.451 13:40:17 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:09:17.451 13:40:17 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:09:17.451 No valid GPT data, bailing 00:09:17.451 13:40:17 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:09:17.451 13:40:17 -- scripts/common.sh@394 -- # pt= 00:09:17.451 13:40:17 -- scripts/common.sh@395 -- # return 1 00:09:17.451 13:40:17 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:09:17.451 1+0 records in 00:09:17.451 1+0 records out 00:09:17.451 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00595322 s, 176 MB/s 00:09:17.451 13:40:17 -- spdk/autotest.sh@105 -- # sync 00:09:17.451 13:40:17 -- spdk/autotest.sh@107 -- # 
xtrace_disable_per_cmd reap_spdk_processes
13:40:17 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null'
13:40:17 -- common/autotest_common.sh@22 -- # reap_spdk_processes
00:09:24.020 13:40:23 -- spdk/autotest.sh@111 -- # uname -s
00:09:24.020 13:40:23 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]]
00:09:24.020 13:40:23 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]]
00:09:24.020 13:40:23 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status
00:09:26.552 Hugepages
00:09:26.552 node hugesize free / total
00:09:26.552 node0 1048576kB 0 / 0
00:09:26.552 node0 2048kB 0 / 0
00:09:26.552 node1 1048576kB 0 / 0
00:09:26.552 node1 2048kB 0 / 0
00:09:26.552
00:09:26.552 Type BDF Vendor Device NUMA Driver Device Block devices
00:09:26.552 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:09:26.552 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:09:26.552 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:09:26.552 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:09:26.552 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:09:26.552 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:09:26.552 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:09:26.552 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:09:26.552 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:09:26.552 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:09:26.552 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:09:26.552 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:09:26.552 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:09:26.552 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:09:26.552 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:09:26.552 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
00:09:26.810 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1
00:09:26.810 13:40:26 -- spdk/autotest.sh@117 -- # uname -s
00:09:26.810 13:40:26 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]]
00:09:26.810 13:40:26 -- spdk/autotest.sh@119 -- # nvme_namespace_revert
00:09:26.810 13:40:26 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh
00:09:30.095 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci
00:09:30.095 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci
00:09:30.095 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci
00:09:30.095 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci
00:09:30.095 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci
00:09:30.095 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci
00:09:30.095 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci
00:09:30.095 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci
00:09:30.095 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci
00:09:30.095 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci
00:09:30.095 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci
00:09:30.095 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci
00:09:30.095 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci
00:09:30.095 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci
00:09:30.095 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci
00:09:30.095 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci
00:09:33.382 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci
00:09:34.756 13:40:34 -- common/autotest_common.sh@1517 -- # sleep 1
00:09:35.694 13:40:35 -- common/autotest_common.sh@1518 -- # bdfs=()
00:09:35.694 13:40:35 -- common/autotest_common.sh@1518 -- # local bdfs
00:09:35.694 13:40:35 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs))
00:09:35.694 13:40:35 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs
00:09:35.694 13:40:35 -- common/autotest_common.sh@1498 -- # bdfs=()
00:09:35.694 13:40:35 -- common/autotest_common.sh@1498 -- # local bdfs 00:09:35.694 13:40:35 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:35.694 13:40:35 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:09:35.694 13:40:35 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:09:35.694 13:40:35 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:09:35.694 13:40:35 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:d8:00.0 00:09:35.694 13:40:35 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:09:38.981 Waiting for block devices as requested 00:09:38.981 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:09:38.981 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:09:38.981 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:09:38.981 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:09:38.981 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:09:38.981 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:09:38.981 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:09:38.981 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:09:39.240 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:09:39.240 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:09:39.240 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:09:39.240 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:09:39.498 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:09:39.498 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:09:39.498 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:09:39.757 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:09:39.757 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:09:41.133 13:40:40 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:09:41.133 13:40:40 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:d8:00.0 00:09:41.133 13:40:40 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:09:41.133 13:40:40 -- common/autotest_common.sh@1487 -- # grep 0000:d8:00.0/nvme/nvme 00:09:41.133 13:40:40 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:09:41.133 13:40:40 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 ]] 00:09:41.133 13:40:40 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:09:41.133 13:40:40 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:09:41.133 13:40:40 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:09:41.133 13:40:40 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:09:41.133 13:40:40 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:09:41.133 13:40:40 -- common/autotest_common.sh@1531 -- # grep oacs 00:09:41.133 13:40:40 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:09:41.133 13:40:40 -- common/autotest_common.sh@1531 -- # oacs=' 0xe' 00:09:41.133 13:40:40 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:09:41.133 13:40:40 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:09:41.133 13:40:40 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:09:41.133 13:40:40 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:09:41.133 13:40:40 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:09:41.133 13:40:40 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 
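Condensed, the device checks traced above do three things: enumerate NVMe controllers by PCI address, resolve each address to its /dev/nvme* character device through sysfs, and parse `nvme id-ctrl` output for the OACS word (bit 3, value 8, is namespace management) and the unallocated capacity. A rough shell equivalent built from the same commands that appear in the trace; the bitmask arithmetic is an assumption inferred from the traced values (oacs 0xe yielding oacs_ns_manage=8):

    rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk

    # Enumerate NVMe PCI addresses (traddr), e.g. 0000:d8:00.0, as in the trace.
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))

    for bdf in "${bdfs[@]}"; do
        # Resolve the PCI address to its controller node via sysfs.
        sysfs_path=$(readlink -f /sys/class/nvme/nvme* | grep "$bdf/nvme/nvme")
        ctrlr=/dev/$(basename "$sysfs_path")

        # OACS (optional admin command support); bit 3 = namespace management.
        oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2)
        oacs_ns_manage=$((oacs & 0x8))

        # unvmcap: unallocated NVM capacity; 0 means nothing needs reverting.
        unvmcap=$(nvme id-ctrl "$ctrlr" | grep unvmcap | cut -d: -f2)
    done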
00:09:41.133 13:40:40 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:09:41.133 13:40:40 -- common/autotest_common.sh@1543 -- # continue 00:09:41.133 13:40:40 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:09:41.133 13:40:40 -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:41.133 13:40:40 -- common/autotest_common.sh@10 -- # set +x 00:09:41.393 13:40:40 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:09:41.393 13:40:40 -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:41.393 13:40:40 -- common/autotest_common.sh@10 -- # set +x 00:09:41.393 13:40:41 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:09:44.683 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:09:44.683 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:09:44.683 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:09:44.683 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:09:44.683 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:09:44.683 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:09:44.683 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:09:44.683 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:09:44.683 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:09:44.683 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:09:44.683 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:09:44.683 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:09:44.683 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:09:44.683 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:09:44.683 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:09:44.683 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:09:47.971 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:09:49.346 13:40:48 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:09:49.346 13:40:48 -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:49.346 13:40:48 -- common/autotest_common.sh@10 -- # set +x 00:09:49.346 13:40:48 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:09:49.346 13:40:48 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:09:49.346 13:40:48 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:09:49.346 13:40:48 -- common/autotest_common.sh@1563 -- # bdfs=() 00:09:49.346 13:40:48 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:09:49.346 13:40:48 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:09:49.346 13:40:48 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:09:49.346 13:40:48 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:09:49.346 13:40:48 -- common/autotest_common.sh@1498 -- # bdfs=() 00:09:49.346 13:40:48 -- common/autotest_common.sh@1498 -- # local bdfs 00:09:49.346 13:40:48 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:49.346 13:40:48 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:09:49.346 13:40:48 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:09:49.346 13:40:48 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:09:49.346 13:40:48 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:d8:00.0 00:09:49.346 13:40:48 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:09:49.346 13:40:48 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:d8:00.0/device 00:09:49.346 13:40:48 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:09:49.346 13:40:48 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:09:49.346 
13:40:48 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf)
00:09:49.346 13:40:48 -- common/autotest_common.sh@1572 -- # (( 1 > 0 ))
00:09:49.346 13:40:48 -- common/autotest_common.sh@1573 -- # printf '%s\n' 0000:d8:00.0
00:09:49.346 13:40:48 -- common/autotest_common.sh@1579 -- # [[ -z 0000:d8:00.0 ]]
00:09:49.346 13:40:48 -- common/autotest_common.sh@1584 -- # spdk_tgt_pid=1543122
00:09:49.346 13:40:48 -- common/autotest_common.sh@1585 -- # waitforlisten 1543122
00:09:49.346 13:40:48 -- common/autotest_common.sh@1583 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt
00:09:49.346 13:40:48 -- common/autotest_common.sh@835 -- # '[' -z 1543122 ']'
00:09:49.346 13:40:48 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:49.346 13:40:48 -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:49.346 13:40:48 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
13:40:48 -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:49.346 13:40:48 -- common/autotest_common.sh@10 -- # set +x
00:09:49.346 [2024-12-05 13:40:49.009823] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization...
00:09:49.346 [2024-12-05 13:40:49.009867] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1543122 ]
00:09:49.346 [2024-12-05 13:40:49.083496] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:49.346 [2024-12-05 13:40:49.105317] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:49.605 13:40:49 -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:49.605 13:40:49 -- common/autotest_common.sh@868 -- # return 0
00:09:49.605 13:40:49 -- common/autotest_common.sh@1587 -- # bdf_id=0
00:09:49.605 13:40:49 -- common/autotest_common.sh@1588 -- # for bdf in "${bdfs[@]}"
00:09:49.605 13:40:49 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:d8:00.0
00:09:52.893 nvme0n1
00:09:52.893 13:40:52 -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test
00:09:52.893 [2024-12-05 13:40:52.462185] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal
00:09:52.893 request:
00:09:52.893 {
00:09:52.893 "nvme_ctrlr_name": "nvme0",
00:09:52.893 "password": "test",
00:09:52.893 "method": "bdev_nvme_opal_revert",
00:09:52.893 "req_id": 1
00:09:52.893 }
00:09:52.893 Got JSON-RPC error response
00:09:52.893 response:
00:09:52.893 {
00:09:52.893 "code": -32602,
00:09:52.893 "message": "Invalid parameters"
00:09:52.893 }
00:09:52.893 13:40:52 -- common/autotest_common.sh@1591 -- # true
00:09:52.893 13:40:52 -- common/autotest_common.sh@1592 -- # (( ++bdf_id ))
00:09:52.893 13:40:52 -- common/autotest_common.sh@1595 -- # killprocess 1543122
00:09:52.893 13:40:52 -- common/autotest_common.sh@954 -- # '[' -z 1543122 ']'
00:09:52.893 13:40:52 -- common/autotest_common.sh@958 -- # kill -0 1543122
00:09:52.893 13:40:52 -- common/autotest_common.sh@959 -- # uname
00:09:52.893 13:40:52 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
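The opal revert step above is a pair of JSON-RPC calls against the freshly started spdk_tgt: attach the controller at its PCI address as bdev "nvme0", then request an Opal revert with the test password. On this drive the revert fails with JSON-RPC error -32602 because the controller does not support Opal, and the traced `true` immediately after the error shows the harness treats that as non-fatal. A sketch of the same sequence; the `|| true` composition is an assumption inferred from that trace:

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

    # Attach the NVMe controller at 0000:d8:00.0 over PCIe as bdev "nvme0".
    "$rpc" bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:d8:00.0

    # Revert Opal state with the test password; drives without Opal support
    # return -32602 ("Invalid parameters"), which is tolerated here.
    "$rpc" bdev_nvme_opal_revert -b nvme0 -p test || true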
00:09:52.893 13:40:52 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1543122 00:09:52.893 13:40:52 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:52.893 13:40:52 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:52.893 13:40:52 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1543122' 00:09:52.893 killing process with pid 1543122 00:09:52.893 13:40:52 -- common/autotest_common.sh@973 -- # kill 1543122 00:09:52.893 13:40:52 -- common/autotest_common.sh@978 -- # wait 1543122 00:09:57.081 13:40:56 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:09:57.081 13:40:56 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:09:57.081 13:40:56 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:09:57.081 13:40:56 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:09:57.081 13:40:56 -- spdk/autotest.sh@149 -- # timing_enter lib 00:09:57.081 13:40:56 -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:57.081 13:40:56 -- common/autotest_common.sh@10 -- # set +x 00:09:57.081 13:40:56 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:09:57.081 13:40:56 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env.sh 00:09:57.081 13:40:56 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:57.081 13:40:56 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:57.081 13:40:56 -- common/autotest_common.sh@10 -- # set +x 00:09:57.081 ************************************ 00:09:57.081 START TEST env 00:09:57.081 ************************************ 00:09:57.081 13:40:56 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env.sh 00:09:57.081 * Looking for test storage... 00:09:57.081 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env 00:09:57.081 13:40:56 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:57.081 13:40:56 env -- common/autotest_common.sh@1711 -- # lcov --version 00:09:57.081 13:40:56 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:57.081 13:40:56 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:57.081 13:40:56 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:57.081 13:40:56 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:57.081 13:40:56 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:57.081 13:40:56 env -- scripts/common.sh@336 -- # IFS=.-: 00:09:57.081 13:40:56 env -- scripts/common.sh@336 -- # read -ra ver1 00:09:57.081 13:40:56 env -- scripts/common.sh@337 -- # IFS=.-: 00:09:57.081 13:40:56 env -- scripts/common.sh@337 -- # read -ra ver2 00:09:57.081 13:40:56 env -- scripts/common.sh@338 -- # local 'op=<' 00:09:57.081 13:40:56 env -- scripts/common.sh@340 -- # ver1_l=2 00:09:57.081 13:40:56 env -- scripts/common.sh@341 -- # ver2_l=1 00:09:57.081 13:40:56 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:57.081 13:40:56 env -- scripts/common.sh@344 -- # case "$op" in 00:09:57.081 13:40:56 env -- scripts/common.sh@345 -- # : 1 00:09:57.081 13:40:56 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:57.081 13:40:56 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:57.081 13:40:56 env -- scripts/common.sh@365 -- # decimal 1 00:09:57.081 13:40:56 env -- scripts/common.sh@353 -- # local d=1 00:09:57.081 13:40:56 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:57.081 13:40:56 env -- scripts/common.sh@355 -- # echo 1 00:09:57.081 13:40:56 env -- scripts/common.sh@365 -- # ver1[v]=1 00:09:57.081 13:40:56 env -- scripts/common.sh@366 -- # decimal 2 00:09:57.081 13:40:56 env -- scripts/common.sh@353 -- # local d=2 00:09:57.081 13:40:56 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:57.081 13:40:56 env -- scripts/common.sh@355 -- # echo 2 00:09:57.081 13:40:56 env -- scripts/common.sh@366 -- # ver2[v]=2 00:09:57.081 13:40:56 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:57.081 13:40:56 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:57.081 13:40:56 env -- scripts/common.sh@368 -- # return 0 00:09:57.081 13:40:56 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:57.081 13:40:56 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:57.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.081 --rc genhtml_branch_coverage=1 00:09:57.081 --rc genhtml_function_coverage=1 00:09:57.081 --rc genhtml_legend=1 00:09:57.081 --rc geninfo_all_blocks=1 00:09:57.081 --rc geninfo_unexecuted_blocks=1 00:09:57.081 00:09:57.081 ' 00:09:57.081 13:40:56 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:57.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.081 --rc genhtml_branch_coverage=1 00:09:57.081 --rc genhtml_function_coverage=1 00:09:57.081 --rc genhtml_legend=1 00:09:57.081 --rc geninfo_all_blocks=1 00:09:57.081 --rc geninfo_unexecuted_blocks=1 00:09:57.081 00:09:57.081 ' 00:09:57.081 13:40:56 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:57.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.081 --rc genhtml_branch_coverage=1 00:09:57.081 --rc genhtml_function_coverage=1 00:09:57.081 --rc genhtml_legend=1 00:09:57.081 --rc geninfo_all_blocks=1 00:09:57.081 --rc geninfo_unexecuted_blocks=1 00:09:57.081 00:09:57.081 ' 00:09:57.081 13:40:56 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:57.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.081 --rc genhtml_branch_coverage=1 00:09:57.081 --rc genhtml_function_coverage=1 00:09:57.081 --rc genhtml_legend=1 00:09:57.081 --rc geninfo_all_blocks=1 00:09:57.081 --rc geninfo_unexecuted_blocks=1 00:09:57.081 00:09:57.081 ' 00:09:57.081 13:40:56 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/memory/memory_ut 00:09:57.081 13:40:56 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:57.081 13:40:56 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:57.081 13:40:56 env -- common/autotest_common.sh@10 -- # set +x 00:09:57.081 ************************************ 00:09:57.081 START TEST env_memory 00:09:57.081 ************************************ 00:09:57.081 13:40:56 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/memory/memory_ut 00:09:57.081 00:09:57.081 00:09:57.081 CUnit - A unit testing framework for C - Version 2.1-3 00:09:57.081 http://cunit.sourceforge.net/ 00:09:57.081 00:09:57.081 00:09:57.081 Suite: memory 00:09:57.081 Test: alloc and free memory map ...[2024-12-05 13:40:56.721015] 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:09:57.081 passed 00:09:57.081 Test: mem map translation ...[2024-12-05 13:40:56.738183] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:09:57.081 [2024-12-05 13:40:56.738204] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:09:57.081 [2024-12-05 13:40:56.738235] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:09:57.081 [2024-12-05 13:40:56.738241] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:09:57.081 passed 00:09:57.081 Test: mem map registration ...[2024-12-05 13:40:56.771892] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:09:57.081 [2024-12-05 13:40:56.771908] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:09:57.081 passed 00:09:57.081 Test: mem map adjacent registrations ...passed 00:09:57.081 00:09:57.081 Run Summary: Type Total Ran Passed Failed Inactive 00:09:57.081 suites 1 1 n/a 0 0 00:09:57.081 tests 4 4 4 0 0 00:09:57.081 asserts 152 152 152 0 n/a 00:09:57.081 00:09:57.081 Elapsed time = 0.128 seconds 00:09:57.081 00:09:57.081 real 0m0.140s 00:09:57.081 user 0m0.131s 00:09:57.081 sys 0m0.008s 00:09:57.081 13:40:56 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:57.081 13:40:56 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:09:57.081 ************************************ 00:09:57.081 END TEST env_memory 00:09:57.081 ************************************ 00:09:57.081 13:40:56 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/vtophys/vtophys 00:09:57.081 13:40:56 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:57.081 13:40:56 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:57.081 13:40:56 env -- common/autotest_common.sh@10 -- # set +x 00:09:57.081 ************************************ 00:09:57.081 START TEST env_vtophys 00:09:57.081 ************************************ 00:09:57.081 13:40:56 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/vtophys/vtophys 00:09:57.081 EAL: lib.eal log level changed from notice to debug 00:09:57.081 EAL: Detected lcore 0 as core 0 on socket 0 00:09:57.081 EAL: Detected lcore 1 as core 1 on socket 0 00:09:57.081 EAL: Detected lcore 2 as core 2 on socket 0 00:09:57.081 EAL: Detected lcore 3 as core 3 on socket 0 00:09:57.081 EAL: Detected lcore 4 as core 4 on socket 0 00:09:57.081 EAL: Detected lcore 5 as core 5 on socket 0 00:09:57.081 EAL: Detected lcore 6 as core 6 on socket 0 00:09:57.081 EAL: Detected lcore 7 as core 8 on socket 0 00:09:57.081 EAL: Detected lcore 8 as core 9 on socket 0 00:09:57.081 EAL: Detected lcore 9 as core 10 on socket 0 00:09:57.081 EAL: Detected lcore 10 as core 11 on socket 0 00:09:57.081 
EAL: Detected lcore 11 as core 12 on socket 0 00:09:57.081 EAL: Detected lcore 12 as core 13 on socket 0 00:09:57.081 EAL: Detected lcore 13 as core 14 on socket 0 00:09:57.081 EAL: Detected lcore 14 as core 16 on socket 0 00:09:57.081 EAL: Detected lcore 15 as core 17 on socket 0 00:09:57.081 EAL: Detected lcore 16 as core 18 on socket 0 00:09:57.081 EAL: Detected lcore 17 as core 19 on socket 0 00:09:57.081 EAL: Detected lcore 18 as core 20 on socket 0 00:09:57.081 EAL: Detected lcore 19 as core 21 on socket 0 00:09:57.081 EAL: Detected lcore 20 as core 22 on socket 0 00:09:57.081 EAL: Detected lcore 21 as core 24 on socket 0 00:09:57.081 EAL: Detected lcore 22 as core 25 on socket 0 00:09:57.081 EAL: Detected lcore 23 as core 26 on socket 0 00:09:57.081 EAL: Detected lcore 24 as core 27 on socket 0 00:09:57.081 EAL: Detected lcore 25 as core 28 on socket 0 00:09:57.081 EAL: Detected lcore 26 as core 29 on socket 0 00:09:57.081 EAL: Detected lcore 27 as core 30 on socket 0 00:09:57.081 EAL: Detected lcore 28 as core 0 on socket 1 00:09:57.081 EAL: Detected lcore 29 as core 1 on socket 1 00:09:57.081 EAL: Detected lcore 30 as core 2 on socket 1 00:09:57.081 EAL: Detected lcore 31 as core 3 on socket 1 00:09:57.081 EAL: Detected lcore 32 as core 4 on socket 1 00:09:57.081 EAL: Detected lcore 33 as core 5 on socket 1 00:09:57.081 EAL: Detected lcore 34 as core 6 on socket 1 00:09:57.081 EAL: Detected lcore 35 as core 8 on socket 1 00:09:57.081 EAL: Detected lcore 36 as core 9 on socket 1 00:09:57.081 EAL: Detected lcore 37 as core 10 on socket 1 00:09:57.081 EAL: Detected lcore 38 as core 11 on socket 1 00:09:57.081 EAL: Detected lcore 39 as core 12 on socket 1 00:09:57.081 EAL: Detected lcore 40 as core 13 on socket 1 00:09:57.081 EAL: Detected lcore 41 as core 14 on socket 1 00:09:57.081 EAL: Detected lcore 42 as core 16 on socket 1 00:09:57.082 EAL: Detected lcore 43 as core 17 on socket 1 00:09:57.082 EAL: Detected lcore 44 as core 18 on socket 1 00:09:57.082 EAL: Detected lcore 45 as core 19 on socket 1 00:09:57.082 EAL: Detected lcore 46 as core 20 on socket 1 00:09:57.082 EAL: Detected lcore 47 as core 21 on socket 1 00:09:57.082 EAL: Detected lcore 48 as core 22 on socket 1 00:09:57.082 EAL: Detected lcore 49 as core 24 on socket 1 00:09:57.082 EAL: Detected lcore 50 as core 25 on socket 1 00:09:57.082 EAL: Detected lcore 51 as core 26 on socket 1 00:09:57.082 EAL: Detected lcore 52 as core 27 on socket 1 00:09:57.082 EAL: Detected lcore 53 as core 28 on socket 1 00:09:57.082 EAL: Detected lcore 54 as core 29 on socket 1 00:09:57.082 EAL: Detected lcore 55 as core 30 on socket 1 00:09:57.082 EAL: Detected lcore 56 as core 0 on socket 0 00:09:57.082 EAL: Detected lcore 57 as core 1 on socket 0 00:09:57.082 EAL: Detected lcore 58 as core 2 on socket 0 00:09:57.082 EAL: Detected lcore 59 as core 3 on socket 0 00:09:57.082 EAL: Detected lcore 60 as core 4 on socket 0 00:09:57.082 EAL: Detected lcore 61 as core 5 on socket 0 00:09:57.082 EAL: Detected lcore 62 as core 6 on socket 0 00:09:57.082 EAL: Detected lcore 63 as core 8 on socket 0 00:09:57.082 EAL: Detected lcore 64 as core 9 on socket 0 00:09:57.082 EAL: Detected lcore 65 as core 10 on socket 0 00:09:57.082 EAL: Detected lcore 66 as core 11 on socket 0 00:09:57.082 EAL: Detected lcore 67 as core 12 on socket 0 00:09:57.082 EAL: Detected lcore 68 as core 13 on socket 0 00:09:57.082 EAL: Detected lcore 69 as core 14 on socket 0 00:09:57.082 EAL: Detected lcore 70 as core 16 on socket 0 00:09:57.082 EAL: Detected lcore 71 as core 
17 on socket 0 00:09:57.082 EAL: Detected lcore 72 as core 18 on socket 0 00:09:57.082 EAL: Detected lcore 73 as core 19 on socket 0 00:09:57.082 EAL: Detected lcore 74 as core 20 on socket 0 00:09:57.082 EAL: Detected lcore 75 as core 21 on socket 0 00:09:57.082 EAL: Detected lcore 76 as core 22 on socket 0 00:09:57.082 EAL: Detected lcore 77 as core 24 on socket 0 00:09:57.082 EAL: Detected lcore 78 as core 25 on socket 0 00:09:57.082 EAL: Detected lcore 79 as core 26 on socket 0 00:09:57.082 EAL: Detected lcore 80 as core 27 on socket 0 00:09:57.082 EAL: Detected lcore 81 as core 28 on socket 0 00:09:57.082 EAL: Detected lcore 82 as core 29 on socket 0 00:09:57.082 EAL: Detected lcore 83 as core 30 on socket 0 00:09:57.082 EAL: Detected lcore 84 as core 0 on socket 1 00:09:57.082 EAL: Detected lcore 85 as core 1 on socket 1 00:09:57.082 EAL: Detected lcore 86 as core 2 on socket 1 00:09:57.082 EAL: Detected lcore 87 as core 3 on socket 1 00:09:57.082 EAL: Detected lcore 88 as core 4 on socket 1 00:09:57.082 EAL: Detected lcore 89 as core 5 on socket 1 00:09:57.082 EAL: Detected lcore 90 as core 6 on socket 1 00:09:57.082 EAL: Detected lcore 91 as core 8 on socket 1 00:09:57.082 EAL: Detected lcore 92 as core 9 on socket 1 00:09:57.082 EAL: Detected lcore 93 as core 10 on socket 1 00:09:57.082 EAL: Detected lcore 94 as core 11 on socket 1 00:09:57.082 EAL: Detected lcore 95 as core 12 on socket 1 00:09:57.082 EAL: Detected lcore 96 as core 13 on socket 1 00:09:57.082 EAL: Detected lcore 97 as core 14 on socket 1 00:09:57.082 EAL: Detected lcore 98 as core 16 on socket 1 00:09:57.082 EAL: Detected lcore 99 as core 17 on socket 1 00:09:57.082 EAL: Detected lcore 100 as core 18 on socket 1 00:09:57.082 EAL: Detected lcore 101 as core 19 on socket 1 00:09:57.082 EAL: Detected lcore 102 as core 20 on socket 1 00:09:57.082 EAL: Detected lcore 103 as core 21 on socket 1 00:09:57.082 EAL: Detected lcore 104 as core 22 on socket 1 00:09:57.082 EAL: Detected lcore 105 as core 24 on socket 1 00:09:57.082 EAL: Detected lcore 106 as core 25 on socket 1 00:09:57.082 EAL: Detected lcore 107 as core 26 on socket 1 00:09:57.082 EAL: Detected lcore 108 as core 27 on socket 1 00:09:57.082 EAL: Detected lcore 109 as core 28 on socket 1 00:09:57.082 EAL: Detected lcore 110 as core 29 on socket 1 00:09:57.082 EAL: Detected lcore 111 as core 30 on socket 1 00:09:57.082 EAL: Maximum logical cores by configuration: 128 00:09:57.082 EAL: Detected CPU lcores: 112 00:09:57.082 EAL: Detected NUMA nodes: 2 00:09:57.082 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:09:57.082 EAL: Detected shared linkage of DPDK 00:09:57.082 EAL: open shared lib /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24.0 00:09:57.082 EAL: open shared lib /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24.0 00:09:57.082 EAL: Registered [vdev] bus. 
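The lcore inventory above (112 lcores across 2 NUMA nodes, with hyperthread pairs such as lcore 0 and lcore 56 both mapping to core 0 on socket 0) comes from the kernel's CPU topology. A rough bash equivalent that reads the same sysfs files and reproduces the "Detected lcore X as core Y on socket Z" lines (EAL's real detection is C code; this only mirrors its output):

  for cpu in /sys/devices/system/cpu/cpu[0-9]*; do
    lcore=${cpu##*cpu}
    core=$(cat "$cpu/topology/core_id")
    socket=$(cat "$cpu/topology/physical_package_id")
    echo "Detected lcore $lcore as core $core on socket $socket"
  done | sort -n -k3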
00:09:57.082 EAL: bus.vdev log level changed from disabled to notice 00:09:57.082 EAL: open shared lib /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24.0 00:09:57.082 EAL: open shared lib /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24.0 00:09:57.082 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:09:57.082 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:09:57.082 EAL: open shared lib /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:09:57.082 EAL: open shared lib /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:09:57.082 EAL: open shared lib /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:09:57.082 EAL: open shared lib /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:09:57.082 EAL: No shared files mode enabled, IPC will be disabled 00:09:57.082 EAL: No shared files mode enabled, IPC is disabled 00:09:57.082 EAL: Bus pci wants IOVA as 'DC' 00:09:57.082 EAL: Bus vdev wants IOVA as 'DC' 00:09:57.082 EAL: Buses did not request a specific IOVA mode. 00:09:57.082 EAL: IOMMU is available, selecting IOVA as VA mode. 00:09:57.082 EAL: Selected IOVA mode 'VA' 00:09:57.082 EAL: Probing VFIO support... 00:09:57.082 EAL: IOMMU type 1 (Type 1) is supported 00:09:57.082 EAL: IOMMU type 7 (sPAPR) is not supported 00:09:57.082 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:09:57.082 EAL: VFIO support initialized 00:09:57.082 EAL: Ask a virtual area of 0x2e000 bytes 00:09:57.082 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:09:57.082 EAL: Setting up physically contiguous memory... 
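The VFIO probe a few records up succeeded because the host exposes IOMMU type 1 and populated IOMMU groups, which is also why EAL selected IOVA mode 'VA' instead of falling back to physical addresses. Quick host-side checks for the same preconditions (a sketch of equivalent shell checks, not the probe EAL itself performs):

  ls /dev/vfio                         # 'vfio' container plus one node per bound group
  ls /sys/kernel/iommu_groups | wc -l  # non-zero once the IOMMU is enabled
  lsmod | grep -E '^vfio'              # vfio/vfio-pci must be loaded for the binds earlier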
00:09:57.082 EAL: Setting maximum number of open files to 524288 00:09:57.082 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:09:57.342 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:09:57.342 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:09:57.342 EAL: Ask a virtual area of 0x61000 bytes 00:09:57.342 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:09:57.342 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:09:57.342 EAL: Ask a virtual area of 0x400000000 bytes 00:09:57.342 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:09:57.342 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:09:57.342 EAL: Ask a virtual area of 0x61000 bytes 00:09:57.342 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:09:57.342 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:09:57.342 EAL: Ask a virtual area of 0x400000000 bytes 00:09:57.342 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:09:57.342 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:09:57.342 EAL: Ask a virtual area of 0x61000 bytes 00:09:57.342 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:09:57.342 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:09:57.342 EAL: Ask a virtual area of 0x400000000 bytes 00:09:57.342 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:09:57.342 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:09:57.342 EAL: Ask a virtual area of 0x61000 bytes 00:09:57.342 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:09:57.342 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:09:57.342 EAL: Ask a virtual area of 0x400000000 bytes 00:09:57.342 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:09:57.342 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:09:57.342 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:09:57.342 EAL: Ask a virtual area of 0x61000 bytes 00:09:57.342 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:09:57.342 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:09:57.342 EAL: Ask a virtual area of 0x400000000 bytes 00:09:57.342 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:09:57.342 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:09:57.342 EAL: Ask a virtual area of 0x61000 bytes 00:09:57.342 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:09:57.342 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:09:57.342 EAL: Ask a virtual area of 0x400000000 bytes 00:09:57.342 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:09:57.342 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:09:57.342 EAL: Ask a virtual area of 0x61000 bytes 00:09:57.342 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:09:57.342 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:09:57.342 EAL: Ask a virtual area of 0x400000000 bytes 00:09:57.342 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:09:57.342 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:09:57.342 EAL: Ask a virtual area of 0x61000 bytes 00:09:57.342 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:09:57.342 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:09:57.342 EAL: Ask a virtual area of 0x400000000 bytes 00:09:57.342 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:09:57.342 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:09:57.342 EAL: Hugepages will be freed exactly as allocated. 00:09:57.342 EAL: No shared files mode enabled, IPC is disabled 00:09:57.342 EAL: No shared files mode enabled, IPC is disabled 00:09:57.342 EAL: TSC frequency is ~2700000 KHz 00:09:57.342 EAL: Main lcore 0 is ready (tid=7fa81c7c2a00;cpuset=[0]) 00:09:57.343 EAL: Trying to obtain current memory policy. 00:09:57.343 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:57.343 EAL: Restoring previous memory policy: 0 00:09:57.343 EAL: request: mp_malloc_sync 00:09:57.343 EAL: No shared files mode enabled, IPC is disabled 00:09:57.343 EAL: Heap on socket 0 was expanded by 2MB 00:09:57.343 EAL: PCI device 0000:3d:00.0 on NUMA socket 0 00:09:57.343 EAL: probe driver: 8086:37d2 net_i40e 00:09:57.343 EAL: Not managed by a supported kernel driver, skipped 00:09:57.343 EAL: PCI device 0000:3d:00.1 on NUMA socket 0 00:09:57.343 EAL: probe driver: 8086:37d2 net_i40e 00:09:57.343 EAL: Not managed by a supported kernel driver, skipped 00:09:57.343 EAL: No shared files mode enabled, IPC is disabled 00:09:57.343 EAL: No shared files mode enabled, IPC is disabled 00:09:57.343 EAL: No PCI address specified using 'addr=' in: bus=pci 00:09:57.343 EAL: Mem event callback 'spdk:(nil)' registered 00:09:57.343 00:09:57.343 00:09:57.343 CUnit - A unit testing framework for C - Version 2.1-3 00:09:57.343 http://cunit.sourceforge.net/ 00:09:57.343 00:09:57.343 00:09:57.343 Suite: components_suite 00:09:57.343 Test: vtophys_malloc_test ...passed 00:09:57.343 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:09:57.343 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:57.343 EAL: Restoring previous memory policy: 4 00:09:57.343 EAL: Calling mem event callback 'spdk:(nil)' 00:09:57.343 EAL: request: mp_malloc_sync 00:09:57.343 EAL: No shared files mode enabled, IPC is disabled 00:09:57.343 EAL: Heap on socket 0 was expanded by 4MB 00:09:57.343 EAL: Calling mem event callback 'spdk:(nil)' 00:09:57.343 EAL: request: mp_malloc_sync 00:09:57.343 EAL: No shared files mode enabled, IPC is disabled 00:09:57.343 EAL: Heap on socket 0 was shrunk by 4MB 00:09:57.343 EAL: Trying to obtain current memory policy. 00:09:57.343 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:57.343 EAL: Restoring previous memory policy: 4 00:09:57.343 EAL: Calling mem event callback 'spdk:(nil)' 00:09:57.343 EAL: request: mp_malloc_sync 00:09:57.343 EAL: No shared files mode enabled, IPC is disabled 00:09:57.343 EAL: Heap on socket 0 was expanded by 6MB 00:09:57.343 EAL: Calling mem event callback 'spdk:(nil)' 00:09:57.343 EAL: request: mp_malloc_sync 00:09:57.343 EAL: No shared files mode enabled, IPC is disabled 00:09:57.343 EAL: Heap on socket 0 was shrunk by 6MB 00:09:57.343 EAL: Trying to obtain current memory policy. 00:09:57.343 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:57.343 EAL: Restoring previous memory policy: 4 00:09:57.343 EAL: Calling mem event callback 'spdk:(nil)' 00:09:57.343 EAL: request: mp_malloc_sync 00:09:57.343 EAL: No shared files mode enabled, IPC is disabled 00:09:57.343 EAL: Heap on socket 0 was expanded by 10MB 00:09:57.343 EAL: Calling mem event callback 'spdk:(nil)' 00:09:57.343 EAL: request: mp_malloc_sync 00:09:57.343 EAL: No shared files mode enabled, IPC is disabled 00:09:57.343 EAL: Heap on socket 0 was shrunk by 10MB 00:09:57.343 EAL: Trying to obtain current memory policy. 
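Each expand/shrink pair in this suite is EAL mapping and then unmapping 2MB hugepages on demand ("Hugepages will be freed exactly as allocated" above), with the registered 'spdk:(nil)' mem event callback notified on every change. The page traffic can be watched from a second shell while the test runs, for example:

  # HugePages_Free drops as the heap expands and recovers on each shrink.
  watch -n 0.2 "grep -E 'HugePages_(Total|Free)' /proc/meminfo"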
00:09:57.343 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:57.343 EAL: Restoring previous memory policy: 4 00:09:57.343 EAL: Calling mem event callback 'spdk:(nil)' 00:09:57.343 EAL: request: mp_malloc_sync 00:09:57.343 EAL: No shared files mode enabled, IPC is disabled 00:09:57.343 EAL: Heap on socket 0 was expanded by 18MB 00:09:57.343 EAL: Calling mem event callback 'spdk:(nil)' 00:09:57.343 EAL: request: mp_malloc_sync 00:09:57.343 EAL: No shared files mode enabled, IPC is disabled 00:09:57.343 EAL: Heap on socket 0 was shrunk by 18MB 00:09:57.343 EAL: Trying to obtain current memory policy. 00:09:57.343 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:57.343 EAL: Restoring previous memory policy: 4 00:09:57.343 EAL: Calling mem event callback 'spdk:(nil)' 00:09:57.343 EAL: request: mp_malloc_sync 00:09:57.343 EAL: No shared files mode enabled, IPC is disabled 00:09:57.343 EAL: Heap on socket 0 was expanded by 34MB 00:09:57.343 EAL: Calling mem event callback 'spdk:(nil)' 00:09:57.343 EAL: request: mp_malloc_sync 00:09:57.343 EAL: No shared files mode enabled, IPC is disabled 00:09:57.343 EAL: Heap on socket 0 was shrunk by 34MB 00:09:57.343 EAL: Trying to obtain current memory policy. 00:09:57.343 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:57.343 EAL: Restoring previous memory policy: 4 00:09:57.343 EAL: Calling mem event callback 'spdk:(nil)' 00:09:57.343 EAL: request: mp_malloc_sync 00:09:57.343 EAL: No shared files mode enabled, IPC is disabled 00:09:57.343 EAL: Heap on socket 0 was expanded by 66MB 00:09:57.343 EAL: Calling mem event callback 'spdk:(nil)' 00:09:57.343 EAL: request: mp_malloc_sync 00:09:57.343 EAL: No shared files mode enabled, IPC is disabled 00:09:57.343 EAL: Heap on socket 0 was shrunk by 66MB 00:09:57.343 EAL: Trying to obtain current memory policy. 00:09:57.343 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:57.343 EAL: Restoring previous memory policy: 4 00:09:57.343 EAL: Calling mem event callback 'spdk:(nil)' 00:09:57.343 EAL: request: mp_malloc_sync 00:09:57.343 EAL: No shared files mode enabled, IPC is disabled 00:09:57.343 EAL: Heap on socket 0 was expanded by 130MB 00:09:57.343 EAL: Calling mem event callback 'spdk:(nil)' 00:09:57.343 EAL: request: mp_malloc_sync 00:09:57.343 EAL: No shared files mode enabled, IPC is disabled 00:09:57.343 EAL: Heap on socket 0 was shrunk by 130MB 00:09:57.343 EAL: Trying to obtain current memory policy. 00:09:57.343 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:57.343 EAL: Restoring previous memory policy: 4 00:09:57.343 EAL: Calling mem event callback 'spdk:(nil)' 00:09:57.343 EAL: request: mp_malloc_sync 00:09:57.343 EAL: No shared files mode enabled, IPC is disabled 00:09:57.343 EAL: Heap on socket 0 was expanded by 258MB 00:09:57.343 EAL: Calling mem event callback 'spdk:(nil)' 00:09:57.602 EAL: request: mp_malloc_sync 00:09:57.602 EAL: No shared files mode enabled, IPC is disabled 00:09:57.602 EAL: Heap on socket 0 was shrunk by 258MB 00:09:57.602 EAL: Trying to obtain current memory policy. 
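The repeated "Setting policy MPOL_PREFERRED for socket 0" / "Restoring previous memory policy" lines are the test toggling the calling thread's NUMA policy around each allocation. A comparable preference can be imposed on a whole run externally (hypothetical invocation, assuming numactl is installed; the test itself switches policy per allocation in C):

  numactl --preferred=0 \
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/vtophys/vtophys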
00:09:57.602 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:57.602 EAL: Restoring previous memory policy: 4 00:09:57.602 EAL: Calling mem event callback 'spdk:(nil)' 00:09:57.602 EAL: request: mp_malloc_sync 00:09:57.602 EAL: No shared files mode enabled, IPC is disabled 00:09:57.603 EAL: Heap on socket 0 was expanded by 514MB 00:09:57.603 EAL: Calling mem event callback 'spdk:(nil)' 00:09:57.861 EAL: request: mp_malloc_sync 00:09:57.861 EAL: No shared files mode enabled, IPC is disabled 00:09:57.861 EAL: Heap on socket 0 was shrunk by 514MB 00:09:57.861 EAL: Trying to obtain current memory policy. 00:09:57.861 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:57.861 EAL: Restoring previous memory policy: 4 00:09:57.861 EAL: Calling mem event callback 'spdk:(nil)' 00:09:57.861 EAL: request: mp_malloc_sync 00:09:57.861 EAL: No shared files mode enabled, IPC is disabled 00:09:57.861 EAL: Heap on socket 0 was expanded by 1026MB 00:09:58.120 EAL: Calling mem event callback 'spdk:(nil)' 00:09:58.379 EAL: request: mp_malloc_sync 00:09:58.379 EAL: No shared files mode enabled, IPC is disabled 00:09:58.379 EAL: Heap on socket 0 was shrunk by 1026MB 00:09:58.379 passed 00:09:58.379 00:09:58.379 Run Summary: Type Total Ran Passed Failed Inactive 00:09:58.379 suites 1 1 n/a 0 0 00:09:58.379 tests 2 2 2 0 0 00:09:58.379 asserts 497 497 497 0 n/a 00:09:58.379 00:09:58.379 Elapsed time = 0.970 seconds 00:09:58.379 EAL: Calling mem event callback 'spdk:(nil)' 00:09:58.379 EAL: request: mp_malloc_sync 00:09:58.380 EAL: No shared files mode enabled, IPC is disabled 00:09:58.380 EAL: Heap on socket 0 was shrunk by 2MB 00:09:58.380 EAL: No shared files mode enabled, IPC is disabled 00:09:58.380 EAL: No shared files mode enabled, IPC is disabled 00:09:58.380 EAL: No shared files mode enabled, IPC is disabled 00:09:58.380 00:09:58.380 real 0m1.095s 00:09:58.380 user 0m0.635s 00:09:58.380 sys 0m0.436s 00:09:58.380 13:40:57 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:58.380 13:40:57 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:09:58.380 ************************************ 00:09:58.380 END TEST env_vtophys 00:09:58.380 ************************************ 00:09:58.380 13:40:58 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/pci/pci_ut 00:09:58.380 13:40:58 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:58.380 13:40:58 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:58.380 13:40:58 env -- common/autotest_common.sh@10 -- # set +x 00:09:58.380 ************************************ 00:09:58.380 START TEST env_pci 00:09:58.380 ************************************ 00:09:58.380 13:40:58 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/pci/pci_ut 00:09:58.380 00:09:58.380 00:09:58.380 CUnit - A unit testing framework for C - Version 2.1-3 00:09:58.380 http://cunit.sourceforge.net/ 00:09:58.380 00:09:58.380 00:09:58.380 Suite: pci 00:09:58.380 Test: pci_hook ...[2024-12-05 13:40:58.068575] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1544976 has claimed it 00:09:58.380 EAL: Cannot find device (10000:00:01.0) 00:09:58.380 EAL: Failed to attach device on primary process 00:09:58.380 passed 00:09:58.380 00:09:58.380 Run Summary: Type Total Ran Passed Failed Inactive 00:09:58.380 suites 1 
1 n/a 0 0 00:09:58.380 tests 1 1 1 0 0 00:09:58.380 asserts 25 25 25 0 n/a 00:09:58.380 00:09:58.380 Elapsed time = 0.029 seconds 00:09:58.380 00:09:58.380 real 0m0.048s 00:09:58.380 user 0m0.011s 00:09:58.380 sys 0m0.036s 00:09:58.380 13:40:58 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:58.380 13:40:58 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:09:58.380 ************************************ 00:09:58.380 END TEST env_pci 00:09:58.380 ************************************ 00:09:58.380 13:40:58 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:09:58.380 13:40:58 env -- env/env.sh@15 -- # uname 00:09:58.380 13:40:58 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:09:58.380 13:40:58 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:09:58.380 13:40:58 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:09:58.380 13:40:58 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:58.380 13:40:58 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:58.380 13:40:58 env -- common/autotest_common.sh@10 -- # set +x 00:09:58.380 ************************************ 00:09:58.380 START TEST env_dpdk_post_init 00:09:58.380 ************************************ 00:09:58.380 13:40:58 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:09:58.380 EAL: Detected CPU lcores: 112 00:09:58.380 EAL: Detected NUMA nodes: 2 00:09:58.380 EAL: Detected shared linkage of DPDK 00:09:58.380 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:09:58.380 EAL: Selected IOVA mode 'VA' 00:09:58.380 EAL: VFIO support initialized 00:09:58.639 TELEMETRY: No legacy callbacks, legacy socket not created 00:09:58.639 EAL: Using IOMMU type 1 (Type 1) 00:09:58.639 EAL: Ignore mapping IO port bar(1) 00:09:58.639 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:09:58.639 EAL: Ignore mapping IO port bar(1) 00:09:58.639 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:09:58.639 EAL: Ignore mapping IO port bar(1) 00:09:58.639 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:09:58.639 EAL: Ignore mapping IO port bar(1) 00:09:58.639 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:09:58.639 EAL: Ignore mapping IO port bar(1) 00:09:58.639 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:09:58.639 EAL: Ignore mapping IO port bar(1) 00:09:58.639 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:09:58.639 EAL: Ignore mapping IO port bar(1) 00:09:58.639 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:09:58.639 EAL: Ignore mapping IO port bar(1) 00:09:58.639 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:09:58.639 EAL: Ignore mapping IO port bar(1) 00:09:58.639 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:09:58.639 EAL: Ignore mapping IO port bar(1) 00:09:58.639 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:09:58.639 EAL: Ignore mapping IO port bar(1) 00:09:58.639 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:09:58.639 EAL: Ignore mapping IO port 
bar(1) 00:09:58.639 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:09:58.639 EAL: Ignore mapping IO port bar(1) 00:09:58.639 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:09:58.639 EAL: Ignore mapping IO port bar(1) 00:09:58.639 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:09:58.639 EAL: Ignore mapping IO port bar(1) 00:09:58.639 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:09:58.897 EAL: Ignore mapping IO port bar(1) 00:09:58.897 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:09:59.464 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:d8:00.0 (socket 1) 00:10:04.733 EAL: Releasing PCI mapped resource for 0000:d8:00.0 00:10:04.733 EAL: Calling pci_unmap_resource for 0000:d8:00.0 at 0x202001040000 00:10:05.301 Starting DPDK initialization... 00:10:05.301 Starting SPDK post initialization... 00:10:05.301 SPDK NVMe probe 00:10:05.301 Attaching to 0000:d8:00.0 00:10:05.301 Attached to 0000:d8:00.0 00:10:05.301 Cleaning up... 00:10:05.301 00:10:05.301 real 0m6.690s 00:10:05.301 user 0m5.407s 00:10:05.301 sys 0m0.353s 00:10:05.301 13:41:04 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:05.301 13:41:04 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:10:05.301 ************************************ 00:10:05.301 END TEST env_dpdk_post_init 00:10:05.301 ************************************ 00:10:05.301 13:41:04 env -- env/env.sh@26 -- # uname 00:10:05.301 13:41:04 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:10:05.301 13:41:04 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:10:05.301 13:41:04 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:05.301 13:41:04 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:05.301 13:41:04 env -- common/autotest_common.sh@10 -- # set +x 00:10:05.301 ************************************ 00:10:05.301 START TEST env_mem_callbacks 00:10:05.301 ************************************ 00:10:05.301 13:41:04 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:10:05.301 EAL: Detected CPU lcores: 112 00:10:05.301 EAL: Detected NUMA nodes: 2 00:10:05.301 EAL: Detected shared linkage of DPDK 00:10:05.301 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:10:05.301 EAL: Selected IOVA mode 'VA' 00:10:05.301 EAL: VFIO support initialized 00:10:05.301 TELEMETRY: No legacy callbacks, legacy socket not created 00:10:05.301 00:10:05.301 00:10:05.301 CUnit - A unit testing framework for C - Version 2.1-3 00:10:05.301 http://cunit.sourceforge.net/ 00:10:05.301 00:10:05.301 00:10:05.301 Suite: memory 00:10:05.301 Test: test ... 
00:10:05.301 register 0x200000200000 2097152 00:10:05.301 malloc 3145728 00:10:05.302 register 0x200000400000 4194304 00:10:05.302 buf 0x200000500000 len 3145728 PASSED 00:10:05.302 malloc 64 00:10:05.302 buf 0x2000004fff40 len 64 PASSED 00:10:05.302 malloc 4194304 00:10:05.302 register 0x200000800000 6291456 00:10:05.302 buf 0x200000a00000 len 4194304 PASSED 00:10:05.302 free 0x200000500000 3145728 00:10:05.302 free 0x2000004fff40 64 00:10:05.302 unregister 0x200000400000 4194304 PASSED 00:10:05.302 free 0x200000a00000 4194304 00:10:05.302 unregister 0x200000800000 6291456 PASSED 00:10:05.302 malloc 8388608 00:10:05.302 register 0x200000400000 10485760 00:10:05.302 buf 0x200000600000 len 8388608 PASSED 00:10:05.302 free 0x200000600000 8388608 00:10:05.302 unregister 0x200000400000 10485760 PASSED 00:10:05.302 passed 00:10:05.302 00:10:05.302 Run Summary: Type Total Ran Passed Failed Inactive 00:10:05.302 suites 1 1 n/a 0 0 00:10:05.302 tests 1 1 1 0 0 00:10:05.302 asserts 15 15 15 0 n/a 00:10:05.302 00:10:05.302 Elapsed time = 0.009 seconds 00:10:05.302 00:10:05.302 real 0m0.061s 00:10:05.302 user 0m0.021s 00:10:05.302 sys 0m0.040s 00:10:05.302 13:41:05 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:05.302 13:41:05 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:10:05.302 ************************************ 00:10:05.302 END TEST env_mem_callbacks 00:10:05.302 ************************************ 00:10:05.302 00:10:05.302 real 0m8.565s 00:10:05.302 user 0m6.445s 00:10:05.302 sys 0m1.203s 00:10:05.302 13:41:05 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:05.302 13:41:05 env -- common/autotest_common.sh@10 -- # set +x 00:10:05.302 ************************************ 00:10:05.302 END TEST env 00:10:05.302 ************************************ 00:10:05.302 13:41:05 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/rpc.sh 00:10:05.302 13:41:05 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:05.302 13:41:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:05.302 13:41:05 -- common/autotest_common.sh@10 -- # set +x 00:10:05.302 ************************************ 00:10:05.302 START TEST rpc 00:10:05.302 ************************************ 00:10:05.302 13:41:05 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/rpc.sh 00:10:05.561 * Looking for test storage... 
00:10:05.561 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:10:05.561 13:41:05 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:05.561 13:41:05 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:10:05.561 13:41:05 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:05.561 13:41:05 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:05.561 13:41:05 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:05.561 13:41:05 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:05.561 13:41:05 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:05.561 13:41:05 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:10:05.561 13:41:05 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:10:05.561 13:41:05 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:10:05.561 13:41:05 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:10:05.561 13:41:05 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:10:05.561 13:41:05 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:10:05.561 13:41:05 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:10:05.561 13:41:05 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:05.561 13:41:05 rpc -- scripts/common.sh@344 -- # case "$op" in 00:10:05.561 13:41:05 rpc -- scripts/common.sh@345 -- # : 1 00:10:05.561 13:41:05 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:05.561 13:41:05 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:05.561 13:41:05 rpc -- scripts/common.sh@365 -- # decimal 1 00:10:05.561 13:41:05 rpc -- scripts/common.sh@353 -- # local d=1 00:10:05.561 13:41:05 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:05.561 13:41:05 rpc -- scripts/common.sh@355 -- # echo 1 00:10:05.561 13:41:05 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:10:05.561 13:41:05 rpc -- scripts/common.sh@366 -- # decimal 2 00:10:05.561 13:41:05 rpc -- scripts/common.sh@353 -- # local d=2 00:10:05.561 13:41:05 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:05.561 13:41:05 rpc -- scripts/common.sh@355 -- # echo 2 00:10:05.561 13:41:05 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:10:05.561 13:41:05 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:05.561 13:41:05 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:05.561 13:41:05 rpc -- scripts/common.sh@368 -- # return 0 00:10:05.561 13:41:05 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:05.561 13:41:05 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:05.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:05.561 --rc genhtml_branch_coverage=1 00:10:05.561 --rc genhtml_function_coverage=1 00:10:05.561 --rc genhtml_legend=1 00:10:05.561 --rc geninfo_all_blocks=1 00:10:05.561 --rc geninfo_unexecuted_blocks=1 00:10:05.561 00:10:05.561 ' 00:10:05.561 13:41:05 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:05.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:05.561 --rc genhtml_branch_coverage=1 00:10:05.561 --rc genhtml_function_coverage=1 00:10:05.561 --rc genhtml_legend=1 00:10:05.561 --rc geninfo_all_blocks=1 00:10:05.561 --rc geninfo_unexecuted_blocks=1 00:10:05.561 00:10:05.561 ' 00:10:05.561 13:41:05 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:05.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:05.561 --rc genhtml_branch_coverage=1 00:10:05.561 --rc genhtml_function_coverage=1 00:10:05.561 
--rc genhtml_legend=1 00:10:05.561 --rc geninfo_all_blocks=1 00:10:05.561 --rc geninfo_unexecuted_blocks=1 00:10:05.561 00:10:05.561 ' 00:10:05.561 13:41:05 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:05.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:05.561 --rc genhtml_branch_coverage=1 00:10:05.561 --rc genhtml_function_coverage=1 00:10:05.561 --rc genhtml_legend=1 00:10:05.561 --rc geninfo_all_blocks=1 00:10:05.561 --rc geninfo_unexecuted_blocks=1 00:10:05.561 00:10:05.561 ' 00:10:05.561 13:41:05 rpc -- rpc/rpc.sh@65 -- # spdk_pid=1546420 00:10:05.561 13:41:05 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:10:05.561 13:41:05 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:10:05.561 13:41:05 rpc -- rpc/rpc.sh@67 -- # waitforlisten 1546420 00:10:05.561 13:41:05 rpc -- common/autotest_common.sh@835 -- # '[' -z 1546420 ']' 00:10:05.561 13:41:05 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:05.561 13:41:05 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:05.561 13:41:05 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:05.562 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:05.562 13:41:05 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:05.562 13:41:05 rpc -- common/autotest_common.sh@10 -- # set +x 00:10:05.562 [2024-12-05 13:41:05.347088] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 00:10:05.562 [2024-12-05 13:41:05.347129] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1546420 ] 00:10:05.821 [2024-12-05 13:41:05.418981] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:05.821 [2024-12-05 13:41:05.440103] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:10:05.821 [2024-12-05 13:41:05.440137] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1546420' to capture a snapshot of events at runtime. 00:10:05.821 [2024-12-05 13:41:05.440143] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:05.821 [2024-12-05 13:41:05.440149] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:05.821 [2024-12-05 13:41:05.440153] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1546420 for offline analysis/debug. 
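The startup notice above spells out both ways to inspect the bdev tracepoints enabled with '-e bdev'. As a sketch, with the pid and shm path taken from the log (the -f form for offline parsing is assumed, not exercised in this run):

  # Live snapshot while spdk_tgt (pid 1546420) is still running:
  spdk_trace -s spdk_tgt -p 1546420
  # Offline: keep the shm file and parse it after the target exits.
  cp /dev/shm/spdk_tgt_trace.pid1546420 /tmp/
  spdk_trace -f /tmp/spdk_tgt_trace.pid1546420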
00:10:05.821 [2024-12-05 13:41:05.440613] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:05.821 13:41:05 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:05.821 13:41:05 rpc -- common/autotest_common.sh@868 -- # return 0 00:10:05.821 13:41:05 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:10:05.821 13:41:05 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:10:05.821 13:41:05 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:10:05.821 13:41:05 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:10:05.821 13:41:05 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:05.821 13:41:05 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:05.821 13:41:05 rpc -- common/autotest_common.sh@10 -- # set +x 00:10:06.079 ************************************ 00:10:06.080 START TEST rpc_integrity 00:10:06.080 ************************************ 00:10:06.080 13:41:05 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:10:06.080 13:41:05 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:06.080 13:41:05 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.080 13:41:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:06.080 13:41:05 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.080 13:41:05 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:10:06.080 13:41:05 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:10:06.080 13:41:05 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:10:06.080 13:41:05 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:10:06.080 13:41:05 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.080 13:41:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:06.080 13:41:05 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.080 13:41:05 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:10:06.080 13:41:05 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:10:06.080 13:41:05 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.080 13:41:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:06.080 13:41:05 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.080 13:41:05 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:10:06.080 { 00:10:06.080 "name": "Malloc0", 00:10:06.080 "aliases": [ 00:10:06.080 "e19972b9-337a-4dfd-8bfe-cd643150d0e2" 00:10:06.080 ], 00:10:06.080 "product_name": "Malloc disk", 00:10:06.080 "block_size": 512, 00:10:06.080 "num_blocks": 16384, 00:10:06.080 "uuid": "e19972b9-337a-4dfd-8bfe-cd643150d0e2", 00:10:06.080 "assigned_rate_limits": { 00:10:06.080 "rw_ios_per_sec": 0, 00:10:06.080 "rw_mbytes_per_sec": 0, 00:10:06.080 "r_mbytes_per_sec": 0, 00:10:06.080 "w_mbytes_per_sec": 0 00:10:06.080 }, 00:10:06.080 "claimed": false, 
00:10:06.080 "zoned": false, 00:10:06.080 "supported_io_types": { 00:10:06.080 "read": true, 00:10:06.080 "write": true, 00:10:06.080 "unmap": true, 00:10:06.080 "flush": true, 00:10:06.080 "reset": true, 00:10:06.080 "nvme_admin": false, 00:10:06.080 "nvme_io": false, 00:10:06.080 "nvme_io_md": false, 00:10:06.080 "write_zeroes": true, 00:10:06.080 "zcopy": true, 00:10:06.080 "get_zone_info": false, 00:10:06.080 "zone_management": false, 00:10:06.080 "zone_append": false, 00:10:06.080 "compare": false, 00:10:06.080 "compare_and_write": false, 00:10:06.080 "abort": true, 00:10:06.080 "seek_hole": false, 00:10:06.080 "seek_data": false, 00:10:06.080 "copy": true, 00:10:06.080 "nvme_iov_md": false 00:10:06.080 }, 00:10:06.080 "memory_domains": [ 00:10:06.080 { 00:10:06.080 "dma_device_id": "system", 00:10:06.080 "dma_device_type": 1 00:10:06.080 }, 00:10:06.080 { 00:10:06.080 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:06.080 "dma_device_type": 2 00:10:06.080 } 00:10:06.080 ], 00:10:06.080 "driver_specific": {} 00:10:06.080 } 00:10:06.080 ]' 00:10:06.080 13:41:05 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:10:06.080 13:41:05 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:10:06.080 13:41:05 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:10:06.080 13:41:05 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.080 13:41:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:06.080 [2024-12-05 13:41:05.808392] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:10:06.080 [2024-12-05 13:41:05.808418] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:06.080 [2024-12-05 13:41:05.808429] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2467f20 00:10:06.080 [2024-12-05 13:41:05.808435] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:06.080 [2024-12-05 13:41:05.809428] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:06.080 [2024-12-05 13:41:05.809447] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:10:06.080 Passthru0 00:10:06.080 13:41:05 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.080 13:41:05 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:10:06.080 13:41:05 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.080 13:41:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:06.080 13:41:05 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.080 13:41:05 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:10:06.080 { 00:10:06.080 "name": "Malloc0", 00:10:06.080 "aliases": [ 00:10:06.080 "e19972b9-337a-4dfd-8bfe-cd643150d0e2" 00:10:06.080 ], 00:10:06.080 "product_name": "Malloc disk", 00:10:06.080 "block_size": 512, 00:10:06.080 "num_blocks": 16384, 00:10:06.080 "uuid": "e19972b9-337a-4dfd-8bfe-cd643150d0e2", 00:10:06.080 "assigned_rate_limits": { 00:10:06.080 "rw_ios_per_sec": 0, 00:10:06.080 "rw_mbytes_per_sec": 0, 00:10:06.080 "r_mbytes_per_sec": 0, 00:10:06.080 "w_mbytes_per_sec": 0 00:10:06.080 }, 00:10:06.080 "claimed": true, 00:10:06.080 "claim_type": "exclusive_write", 00:10:06.080 "zoned": false, 00:10:06.080 "supported_io_types": { 00:10:06.080 "read": true, 00:10:06.080 "write": true, 00:10:06.080 "unmap": true, 00:10:06.080 "flush": true, 00:10:06.080 "reset": true, 
00:10:06.080 "nvme_admin": false, 00:10:06.080 "nvme_io": false, 00:10:06.080 "nvme_io_md": false, 00:10:06.080 "write_zeroes": true, 00:10:06.080 "zcopy": true, 00:10:06.080 "get_zone_info": false, 00:10:06.080 "zone_management": false, 00:10:06.080 "zone_append": false, 00:10:06.080 "compare": false, 00:10:06.080 "compare_and_write": false, 00:10:06.080 "abort": true, 00:10:06.080 "seek_hole": false, 00:10:06.080 "seek_data": false, 00:10:06.080 "copy": true, 00:10:06.080 "nvme_iov_md": false 00:10:06.080 }, 00:10:06.080 "memory_domains": [ 00:10:06.080 { 00:10:06.080 "dma_device_id": "system", 00:10:06.080 "dma_device_type": 1 00:10:06.080 }, 00:10:06.080 { 00:10:06.080 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:06.080 "dma_device_type": 2 00:10:06.080 } 00:10:06.080 ], 00:10:06.080 "driver_specific": {} 00:10:06.080 }, 00:10:06.080 { 00:10:06.080 "name": "Passthru0", 00:10:06.080 "aliases": [ 00:10:06.080 "6b77ca3b-d22b-576b-b1f0-e881d1b937e7" 00:10:06.080 ], 00:10:06.081 "product_name": "passthru", 00:10:06.081 "block_size": 512, 00:10:06.081 "num_blocks": 16384, 00:10:06.081 "uuid": "6b77ca3b-d22b-576b-b1f0-e881d1b937e7", 00:10:06.081 "assigned_rate_limits": { 00:10:06.081 "rw_ios_per_sec": 0, 00:10:06.081 "rw_mbytes_per_sec": 0, 00:10:06.081 "r_mbytes_per_sec": 0, 00:10:06.081 "w_mbytes_per_sec": 0 00:10:06.081 }, 00:10:06.081 "claimed": false, 00:10:06.081 "zoned": false, 00:10:06.081 "supported_io_types": { 00:10:06.081 "read": true, 00:10:06.081 "write": true, 00:10:06.081 "unmap": true, 00:10:06.081 "flush": true, 00:10:06.081 "reset": true, 00:10:06.081 "nvme_admin": false, 00:10:06.081 "nvme_io": false, 00:10:06.081 "nvme_io_md": false, 00:10:06.081 "write_zeroes": true, 00:10:06.081 "zcopy": true, 00:10:06.081 "get_zone_info": false, 00:10:06.081 "zone_management": false, 00:10:06.081 "zone_append": false, 00:10:06.081 "compare": false, 00:10:06.081 "compare_and_write": false, 00:10:06.081 "abort": true, 00:10:06.081 "seek_hole": false, 00:10:06.081 "seek_data": false, 00:10:06.081 "copy": true, 00:10:06.081 "nvme_iov_md": false 00:10:06.081 }, 00:10:06.081 "memory_domains": [ 00:10:06.081 { 00:10:06.081 "dma_device_id": "system", 00:10:06.081 "dma_device_type": 1 00:10:06.081 }, 00:10:06.081 { 00:10:06.081 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:06.081 "dma_device_type": 2 00:10:06.081 } 00:10:06.081 ], 00:10:06.081 "driver_specific": { 00:10:06.081 "passthru": { 00:10:06.081 "name": "Passthru0", 00:10:06.081 "base_bdev_name": "Malloc0" 00:10:06.081 } 00:10:06.081 } 00:10:06.081 } 00:10:06.081 ]' 00:10:06.081 13:41:05 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:10:06.081 13:41:05 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:10:06.081 13:41:05 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:10:06.081 13:41:05 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.081 13:41:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:06.081 13:41:05 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.081 13:41:05 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:10:06.081 13:41:05 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.081 13:41:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:06.081 13:41:05 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.081 13:41:05 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:10:06.081 
13:41:05 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.081 13:41:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:06.081 13:41:05 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.081 13:41:05 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:10:06.081 13:41:05 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:10:06.339 13:41:05 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:10:06.339 00:10:06.339 real 0m0.274s 00:10:06.339 user 0m0.169s 00:10:06.339 sys 0m0.043s 00:10:06.339 13:41:05 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:06.339 13:41:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:06.339 ************************************ 00:10:06.339 END TEST rpc_integrity 00:10:06.339 ************************************ 00:10:06.339 13:41:05 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:10:06.340 13:41:05 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:06.340 13:41:05 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:06.340 13:41:05 rpc -- common/autotest_common.sh@10 -- # set +x 00:10:06.340 ************************************ 00:10:06.340 START TEST rpc_plugins 00:10:06.340 ************************************ 00:10:06.340 13:41:06 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:10:06.340 13:41:06 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:10:06.340 13:41:06 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.340 13:41:06 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:10:06.340 13:41:06 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.340 13:41:06 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:10:06.340 13:41:06 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:10:06.340 13:41:06 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.340 13:41:06 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:10:06.340 13:41:06 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.340 13:41:06 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:10:06.340 { 00:10:06.340 "name": "Malloc1", 00:10:06.340 "aliases": [ 00:10:06.340 "9910c0d0-f19e-4208-b7ff-031629d805b0" 00:10:06.340 ], 00:10:06.340 "product_name": "Malloc disk", 00:10:06.340 "block_size": 4096, 00:10:06.340 "num_blocks": 256, 00:10:06.340 "uuid": "9910c0d0-f19e-4208-b7ff-031629d805b0", 00:10:06.340 "assigned_rate_limits": { 00:10:06.340 "rw_ios_per_sec": 0, 00:10:06.340 "rw_mbytes_per_sec": 0, 00:10:06.340 "r_mbytes_per_sec": 0, 00:10:06.340 "w_mbytes_per_sec": 0 00:10:06.340 }, 00:10:06.340 "claimed": false, 00:10:06.340 "zoned": false, 00:10:06.340 "supported_io_types": { 00:10:06.340 "read": true, 00:10:06.340 "write": true, 00:10:06.340 "unmap": true, 00:10:06.340 "flush": true, 00:10:06.340 "reset": true, 00:10:06.340 "nvme_admin": false, 00:10:06.340 "nvme_io": false, 00:10:06.340 "nvme_io_md": false, 00:10:06.340 "write_zeroes": true, 00:10:06.340 "zcopy": true, 00:10:06.340 "get_zone_info": false, 00:10:06.340 "zone_management": false, 00:10:06.340 "zone_append": false, 00:10:06.340 "compare": false, 00:10:06.340 "compare_and_write": false, 00:10:06.340 "abort": true, 00:10:06.340 "seek_hole": false, 00:10:06.340 "seek_data": false, 00:10:06.340 "copy": true, 00:10:06.340 "nvme_iov_md": false 00:10:06.340 }, 00:10:06.340 
"memory_domains": [ 00:10:06.340 { 00:10:06.340 "dma_device_id": "system", 00:10:06.340 "dma_device_type": 1 00:10:06.340 }, 00:10:06.340 { 00:10:06.340 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:06.340 "dma_device_type": 2 00:10:06.340 } 00:10:06.340 ], 00:10:06.340 "driver_specific": {} 00:10:06.340 } 00:10:06.340 ]' 00:10:06.340 13:41:06 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:10:06.340 13:41:06 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:10:06.340 13:41:06 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:10:06.340 13:41:06 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.340 13:41:06 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:10:06.340 13:41:06 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.340 13:41:06 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:10:06.340 13:41:06 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.340 13:41:06 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:10:06.340 13:41:06 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.340 13:41:06 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:10:06.340 13:41:06 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:10:06.340 13:41:06 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:10:06.340 00:10:06.340 real 0m0.135s 00:10:06.340 user 0m0.077s 00:10:06.340 sys 0m0.024s 00:10:06.340 13:41:06 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:06.340 13:41:06 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:10:06.340 ************************************ 00:10:06.340 END TEST rpc_plugins 00:10:06.340 ************************************ 00:10:06.598 13:41:06 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:10:06.598 13:41:06 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:06.598 13:41:06 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:06.598 13:41:06 rpc -- common/autotest_common.sh@10 -- # set +x 00:10:06.598 ************************************ 00:10:06.598 START TEST rpc_trace_cmd_test 00:10:06.598 ************************************ 00:10:06.598 13:41:06 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:10:06.598 13:41:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:10:06.598 13:41:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:10:06.598 13:41:06 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.598 13:41:06 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.598 13:41:06 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.598 13:41:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:10:06.598 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1546420", 00:10:06.598 "tpoint_group_mask": "0x8", 00:10:06.598 "iscsi_conn": { 00:10:06.598 "mask": "0x2", 00:10:06.598 "tpoint_mask": "0x0" 00:10:06.598 }, 00:10:06.598 "scsi": { 00:10:06.598 "mask": "0x4", 00:10:06.598 "tpoint_mask": "0x0" 00:10:06.598 }, 00:10:06.598 "bdev": { 00:10:06.598 "mask": "0x8", 00:10:06.598 "tpoint_mask": "0xffffffffffffffff" 00:10:06.598 }, 00:10:06.598 "nvmf_rdma": { 00:10:06.598 "mask": "0x10", 00:10:06.598 "tpoint_mask": "0x0" 00:10:06.598 }, 00:10:06.598 "nvmf_tcp": { 00:10:06.598 "mask": "0x20", 00:10:06.598 "tpoint_mask": "0x0" 00:10:06.598 }, 
00:10:06.598 "ftl": { 00:10:06.598 "mask": "0x40", 00:10:06.598 "tpoint_mask": "0x0" 00:10:06.598 }, 00:10:06.598 "blobfs": { 00:10:06.598 "mask": "0x80", 00:10:06.598 "tpoint_mask": "0x0" 00:10:06.598 }, 00:10:06.598 "dsa": { 00:10:06.598 "mask": "0x200", 00:10:06.598 "tpoint_mask": "0x0" 00:10:06.598 }, 00:10:06.598 "thread": { 00:10:06.598 "mask": "0x400", 00:10:06.598 "tpoint_mask": "0x0" 00:10:06.598 }, 00:10:06.598 "nvme_pcie": { 00:10:06.598 "mask": "0x800", 00:10:06.598 "tpoint_mask": "0x0" 00:10:06.598 }, 00:10:06.598 "iaa": { 00:10:06.598 "mask": "0x1000", 00:10:06.598 "tpoint_mask": "0x0" 00:10:06.598 }, 00:10:06.598 "nvme_tcp": { 00:10:06.598 "mask": "0x2000", 00:10:06.598 "tpoint_mask": "0x0" 00:10:06.598 }, 00:10:06.598 "bdev_nvme": { 00:10:06.598 "mask": "0x4000", 00:10:06.598 "tpoint_mask": "0x0" 00:10:06.598 }, 00:10:06.598 "sock": { 00:10:06.598 "mask": "0x8000", 00:10:06.598 "tpoint_mask": "0x0" 00:10:06.598 }, 00:10:06.599 "blob": { 00:10:06.599 "mask": "0x10000", 00:10:06.599 "tpoint_mask": "0x0" 00:10:06.599 }, 00:10:06.599 "bdev_raid": { 00:10:06.599 "mask": "0x20000", 00:10:06.599 "tpoint_mask": "0x0" 00:10:06.599 }, 00:10:06.599 "scheduler": { 00:10:06.599 "mask": "0x40000", 00:10:06.599 "tpoint_mask": "0x0" 00:10:06.599 } 00:10:06.599 }' 00:10:06.599 13:41:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:10:06.599 13:41:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:10:06.599 13:41:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:10:06.599 13:41:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:10:06.599 13:41:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:10:06.599 13:41:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:10:06.599 13:41:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:10:06.599 13:41:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:10:06.599 13:41:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:10:06.857 13:41:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:10:06.857 00:10:06.857 real 0m0.222s 00:10:06.857 user 0m0.191s 00:10:06.857 sys 0m0.024s 00:10:06.857 13:41:06 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:06.857 13:41:06 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.857 ************************************ 00:10:06.857 END TEST rpc_trace_cmd_test 00:10:06.857 ************************************ 00:10:06.857 13:41:06 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:10:06.857 13:41:06 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:10:06.857 13:41:06 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:10:06.857 13:41:06 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:06.857 13:41:06 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:06.857 13:41:06 rpc -- common/autotest_common.sh@10 -- # set +x 00:10:06.857 ************************************ 00:10:06.857 START TEST rpc_daemon_integrity 00:10:06.857 ************************************ 00:10:06.857 13:41:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:10:06.857 13:41:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:06.857 13:41:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.857 13:41:06 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@10 -- # set +x 00:10:06.857 13:41:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.857 13:41:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:10:06.857 13:41:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:10:06.857 13:41:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:10:06.857 13:41:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:10:06.857 13:41:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.857 13:41:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:06.857 13:41:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.857 13:41:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:10:06.857 13:41:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:10:06.857 13:41:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.857 13:41:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:06.857 13:41:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.857 13:41:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:10:06.857 { 00:10:06.857 "name": "Malloc2", 00:10:06.857 "aliases": [ 00:10:06.857 "53da0665-2579-4435-a418-045babbbf8c2" 00:10:06.857 ], 00:10:06.857 "product_name": "Malloc disk", 00:10:06.857 "block_size": 512, 00:10:06.857 "num_blocks": 16384, 00:10:06.857 "uuid": "53da0665-2579-4435-a418-045babbbf8c2", 00:10:06.857 "assigned_rate_limits": { 00:10:06.857 "rw_ios_per_sec": 0, 00:10:06.857 "rw_mbytes_per_sec": 0, 00:10:06.857 "r_mbytes_per_sec": 0, 00:10:06.857 "w_mbytes_per_sec": 0 00:10:06.857 }, 00:10:06.857 "claimed": false, 00:10:06.857 "zoned": false, 00:10:06.857 "supported_io_types": { 00:10:06.857 "read": true, 00:10:06.857 "write": true, 00:10:06.857 "unmap": true, 00:10:06.857 "flush": true, 00:10:06.857 "reset": true, 00:10:06.857 "nvme_admin": false, 00:10:06.857 "nvme_io": false, 00:10:06.857 "nvme_io_md": false, 00:10:06.857 "write_zeroes": true, 00:10:06.857 "zcopy": true, 00:10:06.857 "get_zone_info": false, 00:10:06.857 "zone_management": false, 00:10:06.857 "zone_append": false, 00:10:06.857 "compare": false, 00:10:06.857 "compare_and_write": false, 00:10:06.857 "abort": true, 00:10:06.857 "seek_hole": false, 00:10:06.857 "seek_data": false, 00:10:06.857 "copy": true, 00:10:06.857 "nvme_iov_md": false 00:10:06.857 }, 00:10:06.857 "memory_domains": [ 00:10:06.857 { 00:10:06.857 "dma_device_id": "system", 00:10:06.857 "dma_device_type": 1 00:10:06.857 }, 00:10:06.857 { 00:10:06.857 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:06.857 "dma_device_type": 2 00:10:06.857 } 00:10:06.857 ], 00:10:06.857 "driver_specific": {} 00:10:06.857 } 00:10:06.857 ]' 00:10:06.857 13:41:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:10:06.857 13:41:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:10:06.857 13:41:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:10:06.857 13:41:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.857 13:41:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:06.857 [2024-12-05 13:41:06.650599] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:10:06.857 [2024-12-05 13:41:06.650623] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:06.857 [2024-12-05 13:41:06.650634] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2469df0 00:10:06.857 [2024-12-05 13:41:06.650640] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:06.857 [2024-12-05 13:41:06.651535] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:06.857 [2024-12-05 13:41:06.651554] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:10:06.857 Passthru0 00:10:06.857 13:41:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.857 13:41:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:10:06.857 13:41:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.857 13:41:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:06.857 13:41:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.857 13:41:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:10:06.857 { 00:10:06.857 "name": "Malloc2", 00:10:06.857 "aliases": [ 00:10:06.857 "53da0665-2579-4435-a418-045babbbf8c2" 00:10:06.857 ], 00:10:06.857 "product_name": "Malloc disk", 00:10:06.857 "block_size": 512, 00:10:06.857 "num_blocks": 16384, 00:10:06.857 "uuid": "53da0665-2579-4435-a418-045babbbf8c2", 00:10:06.857 "assigned_rate_limits": { 00:10:06.857 "rw_ios_per_sec": 0, 00:10:06.857 "rw_mbytes_per_sec": 0, 00:10:06.857 "r_mbytes_per_sec": 0, 00:10:06.857 "w_mbytes_per_sec": 0 00:10:06.857 }, 00:10:06.857 "claimed": true, 00:10:06.857 "claim_type": "exclusive_write", 00:10:06.857 "zoned": false, 00:10:06.857 "supported_io_types": { 00:10:06.857 "read": true, 00:10:06.857 "write": true, 00:10:06.857 "unmap": true, 00:10:06.857 "flush": true, 00:10:06.857 "reset": true, 00:10:06.857 "nvme_admin": false, 00:10:06.857 "nvme_io": false, 00:10:06.857 "nvme_io_md": false, 00:10:06.857 "write_zeroes": true, 00:10:06.857 "zcopy": true, 00:10:06.857 "get_zone_info": false, 00:10:06.857 "zone_management": false, 00:10:06.857 "zone_append": false, 00:10:06.857 "compare": false, 00:10:06.858 "compare_and_write": false, 00:10:06.858 "abort": true, 00:10:06.858 "seek_hole": false, 00:10:06.858 "seek_data": false, 00:10:06.858 "copy": true, 00:10:06.858 "nvme_iov_md": false 00:10:06.858 }, 00:10:06.858 "memory_domains": [ 00:10:06.858 { 00:10:06.858 "dma_device_id": "system", 00:10:06.858 "dma_device_type": 1 00:10:06.858 }, 00:10:06.858 { 00:10:06.858 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:06.858 "dma_device_type": 2 00:10:06.858 } 00:10:06.858 ], 00:10:06.858 "driver_specific": {} 00:10:06.858 }, 00:10:06.858 { 00:10:06.858 "name": "Passthru0", 00:10:06.858 "aliases": [ 00:10:06.858 "d50e78f0-e0ad-5a5c-ba4a-d3009cec979a" 00:10:06.858 ], 00:10:06.858 "product_name": "passthru", 00:10:06.858 "block_size": 512, 00:10:06.858 "num_blocks": 16384, 00:10:06.858 "uuid": "d50e78f0-e0ad-5a5c-ba4a-d3009cec979a", 00:10:06.858 "assigned_rate_limits": { 00:10:06.858 "rw_ios_per_sec": 0, 00:10:06.858 "rw_mbytes_per_sec": 0, 00:10:06.858 "r_mbytes_per_sec": 0, 00:10:06.858 "w_mbytes_per_sec": 0 00:10:06.858 }, 00:10:06.858 "claimed": false, 00:10:06.858 "zoned": false, 00:10:06.858 "supported_io_types": { 00:10:06.858 "read": true, 00:10:06.858 "write": true, 00:10:06.858 "unmap": true, 00:10:06.858 "flush": true, 00:10:06.858 "reset": true, 00:10:06.858 "nvme_admin": false, 
00:10:06.858 "nvme_io": false, 00:10:06.858 "nvme_io_md": false, 00:10:06.858 "write_zeroes": true, 00:10:06.858 "zcopy": true, 00:10:06.858 "get_zone_info": false, 00:10:06.858 "zone_management": false, 00:10:06.858 "zone_append": false, 00:10:06.858 "compare": false, 00:10:06.858 "compare_and_write": false, 00:10:06.858 "abort": true, 00:10:06.858 "seek_hole": false, 00:10:06.858 "seek_data": false, 00:10:06.858 "copy": true, 00:10:06.858 "nvme_iov_md": false 00:10:06.858 }, 00:10:06.858 "memory_domains": [ 00:10:06.858 { 00:10:06.858 "dma_device_id": "system", 00:10:06.858 "dma_device_type": 1 00:10:06.858 }, 00:10:06.858 { 00:10:06.858 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:06.858 "dma_device_type": 2 00:10:06.858 } 00:10:06.858 ], 00:10:06.858 "driver_specific": { 00:10:06.858 "passthru": { 00:10:06.858 "name": "Passthru0", 00:10:06.858 "base_bdev_name": "Malloc2" 00:10:06.858 } 00:10:06.858 } 00:10:06.858 } 00:10:06.858 ]' 00:10:06.858 13:41:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:10:07.117 13:41:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:10:07.117 13:41:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:10:07.117 13:41:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.117 13:41:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:07.117 13:41:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.117 13:41:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:10:07.117 13:41:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.117 13:41:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:07.117 13:41:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.117 13:41:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:10:07.117 13:41:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.117 13:41:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:07.117 13:41:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.117 13:41:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:10:07.117 13:41:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:10:07.117 13:41:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:10:07.117 00:10:07.117 real 0m0.280s 00:10:07.117 user 0m0.176s 00:10:07.117 sys 0m0.035s 00:10:07.117 13:41:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:07.117 13:41:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:07.117 ************************************ 00:10:07.117 END TEST rpc_daemon_integrity 00:10:07.117 ************************************ 00:10:07.117 13:41:06 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:10:07.117 13:41:06 rpc -- rpc/rpc.sh@84 -- # killprocess 1546420 00:10:07.117 13:41:06 rpc -- common/autotest_common.sh@954 -- # '[' -z 1546420 ']' 00:10:07.117 13:41:06 rpc -- common/autotest_common.sh@958 -- # kill -0 1546420 00:10:07.117 13:41:06 rpc -- common/autotest_common.sh@959 -- # uname 00:10:07.117 13:41:06 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:07.117 13:41:06 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1546420 00:10:07.117 13:41:06 rpc -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:07.117 13:41:06 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:07.117 13:41:06 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1546420' 00:10:07.117 killing process with pid 1546420 00:10:07.117 13:41:06 rpc -- common/autotest_common.sh@973 -- # kill 1546420 00:10:07.117 13:41:06 rpc -- common/autotest_common.sh@978 -- # wait 1546420 00:10:07.376 00:10:07.376 real 0m2.046s 00:10:07.376 user 0m2.603s 00:10:07.376 sys 0m0.690s 00:10:07.376 13:41:07 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:07.376 13:41:07 rpc -- common/autotest_common.sh@10 -- # set +x 00:10:07.376 ************************************ 00:10:07.376 END TEST rpc 00:10:07.376 ************************************ 00:10:07.376 13:41:07 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:10:07.376 13:41:07 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:07.376 13:41:07 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:07.376 13:41:07 -- common/autotest_common.sh@10 -- # set +x 00:10:07.635 ************************************ 00:10:07.635 START TEST skip_rpc 00:10:07.635 ************************************ 00:10:07.635 13:41:07 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:10:07.635 * Looking for test storage... 00:10:07.635 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:10:07.635 13:41:07 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:07.635 13:41:07 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:10:07.635 13:41:07 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:07.635 13:41:07 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:07.635 13:41:07 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:07.635 13:41:07 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:07.635 13:41:07 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:07.635 13:41:07 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:10:07.635 13:41:07 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:10:07.635 13:41:07 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:10:07.635 13:41:07 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:10:07.635 13:41:07 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:10:07.635 13:41:07 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:10:07.635 13:41:07 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:10:07.635 13:41:07 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:07.635 13:41:07 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:10:07.635 13:41:07 skip_rpc -- scripts/common.sh@345 -- # : 1 00:10:07.635 13:41:07 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:07.635 13:41:07 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:07.635 13:41:07 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:10:07.635 13:41:07 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:10:07.635 13:41:07 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:07.635 13:41:07 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:10:07.635 13:41:07 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:10:07.635 13:41:07 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:10:07.635 13:41:07 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:10:07.635 13:41:07 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:07.635 13:41:07 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:10:07.635 13:41:07 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:10:07.635 13:41:07 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:07.635 13:41:07 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:07.635 13:41:07 skip_rpc -- scripts/common.sh@368 -- # return 0 00:10:07.635 13:41:07 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:07.635 13:41:07 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:07.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:07.635 --rc genhtml_branch_coverage=1 00:10:07.635 --rc genhtml_function_coverage=1 00:10:07.635 --rc genhtml_legend=1 00:10:07.635 --rc geninfo_all_blocks=1 00:10:07.635 --rc geninfo_unexecuted_blocks=1 00:10:07.635 00:10:07.635 ' 00:10:07.635 13:41:07 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:07.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:07.635 --rc genhtml_branch_coverage=1 00:10:07.635 --rc genhtml_function_coverage=1 00:10:07.635 --rc genhtml_legend=1 00:10:07.635 --rc geninfo_all_blocks=1 00:10:07.635 --rc geninfo_unexecuted_blocks=1 00:10:07.635 00:10:07.635 ' 00:10:07.635 13:41:07 skip_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:07.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:07.635 --rc genhtml_branch_coverage=1 00:10:07.635 --rc genhtml_function_coverage=1 00:10:07.635 --rc genhtml_legend=1 00:10:07.635 --rc geninfo_all_blocks=1 00:10:07.635 --rc geninfo_unexecuted_blocks=1 00:10:07.635 00:10:07.635 ' 00:10:07.635 13:41:07 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:07.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:07.635 --rc genhtml_branch_coverage=1 00:10:07.635 --rc genhtml_function_coverage=1 00:10:07.635 --rc genhtml_legend=1 00:10:07.635 --rc geninfo_all_blocks=1 00:10:07.635 --rc geninfo_unexecuted_blocks=1 00:10:07.635 00:10:07.635 ' 00:10:07.635 13:41:07 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:10:07.635 13:41:07 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt 00:10:07.635 13:41:07 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:10:07.635 13:41:07 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:07.635 13:41:07 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:07.635 13:41:07 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:07.635 ************************************ 00:10:07.635 START TEST skip_rpc 00:10:07.635 ************************************ 00:10:07.635 13:41:07 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:10:07.635 13:41:07 
skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=1546876 00:10:07.635 13:41:07 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:10:07.635 13:41:07 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:10:07.635 13:41:07 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:10:07.938 [2024-12-05 13:41:07.487477] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 00:10:07.938 [2024-12-05 13:41:07.487513] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1546876 ] 00:10:07.938 [2024-12-05 13:41:07.560029] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:07.938 [2024-12-05 13:41:07.581684] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:13.320 13:41:12 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:10:13.320 13:41:12 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:10:13.320 13:41:12 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:10:13.320 13:41:12 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:10:13.320 13:41:12 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:13.320 13:41:12 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:10:13.320 13:41:12 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:13.320 13:41:12 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:10:13.320 13:41:12 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.320 13:41:12 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:13.320 13:41:12 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:13.320 13:41:12 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:10:13.320 13:41:12 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:13.320 13:41:12 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:13.320 13:41:12 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:13.320 13:41:12 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:10:13.320 13:41:12 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 1546876 00:10:13.320 13:41:12 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 1546876 ']' 00:10:13.320 13:41:12 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 1546876 00:10:13.320 13:41:12 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:10:13.320 13:41:12 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:13.320 13:41:12 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1546876 00:10:13.320 13:41:12 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:13.320 13:41:12 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:13.320 13:41:12 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1546876' 00:10:13.320 killing process with pid 1546876 00:10:13.320 13:41:12 
skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 1546876 00:10:13.320 13:41:12 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 1546876 00:10:13.320 00:10:13.320 real 0m5.353s 00:10:13.320 user 0m5.103s 00:10:13.320 sys 0m0.288s 00:10:13.320 13:41:12 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:13.320 13:41:12 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:13.320 ************************************ 00:10:13.320 END TEST skip_rpc 00:10:13.320 ************************************ 00:10:13.320 13:41:12 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:10:13.320 13:41:12 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:13.320 13:41:12 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:13.320 13:41:12 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:13.320 ************************************ 00:10:13.320 START TEST skip_rpc_with_json 00:10:13.320 ************************************ 00:10:13.320 13:41:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:10:13.320 13:41:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:10:13.320 13:41:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=1547962 00:10:13.320 13:41:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:10:13.320 13:41:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:10:13.320 13:41:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 1547962 00:10:13.320 13:41:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 1547962 ']' 00:10:13.320 13:41:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:13.320 13:41:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:13.320 13:41:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:13.320 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:13.320 13:41:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:13.320 13:41:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:10:13.320 [2024-12-05 13:41:12.913181] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 
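
The skip_rpc case that just finished above reduces to: start spdk_tgt with --no-rpc-server, then prove that any RPC call fails. A hedged sketch of the same flow, assuming a stock SPDK checkout (the relative paths and calling scripts/rpc.py directly in place of the suite's rpc_cmd wrapper are assumptions, not taken from this log):

    # Start the target without an RPC server, as skip_rpc.sh does above.
    ./build/bin/spdk_tgt --no-rpc-server -m 0x1 &
    pid=$!
    sleep 5                                    # the suite also sleeps before probing
    # Any RPC must now fail; spdk_get_version is the probe used in the log.
    if ./scripts/rpc.py spdk_get_version; then
        echo "unexpected: RPC answered with --no-rpc-server" >&2
    fi
    kill "$pid"; wait "$pid"
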
00:10:13.320 [2024-12-05 13:41:12.913223] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1547962 ] 00:10:13.320 [2024-12-05 13:41:12.986420] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:13.320 [2024-12-05 13:41:13.005438] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:13.580 13:41:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:13.580 13:41:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:10:13.580 13:41:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:10:13.580 13:41:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.580 13:41:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:10:13.580 [2024-12-05 13:41:13.212907] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:10:13.580 request: 00:10:13.580 { 00:10:13.580 "trtype": "tcp", 00:10:13.580 "method": "nvmf_get_transports", 00:10:13.580 "req_id": 1 00:10:13.580 } 00:10:13.580 Got JSON-RPC error response 00:10:13.580 response: 00:10:13.580 { 00:10:13.580 "code": -19, 00:10:13.580 "message": "No such device" 00:10:13.580 } 00:10:13.580 13:41:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:13.580 13:41:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:10:13.580 13:41:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.580 13:41:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:10:13.580 [2024-12-05 13:41:13.225011] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:13.580 13:41:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.580 13:41:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:10:13.580 13:41:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.580 13:41:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:10:13.580 13:41:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.580 13:41:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:10:13.580 { 00:10:13.580 "subsystems": [ 00:10:13.580 { 00:10:13.580 "subsystem": "fsdev", 00:10:13.580 "config": [ 00:10:13.580 { 00:10:13.580 "method": "fsdev_set_opts", 00:10:13.580 "params": { 00:10:13.580 "fsdev_io_pool_size": 65535, 00:10:13.580 "fsdev_io_cache_size": 256 00:10:13.580 } 00:10:13.580 } 00:10:13.580 ] 00:10:13.580 }, 00:10:13.580 { 00:10:13.580 "subsystem": "keyring", 00:10:13.580 "config": [] 00:10:13.580 }, 00:10:13.580 { 00:10:13.580 "subsystem": "iobuf", 00:10:13.580 "config": [ 00:10:13.580 { 00:10:13.580 "method": "iobuf_set_options", 00:10:13.580 "params": { 00:10:13.580 "small_pool_count": 8192, 00:10:13.580 "large_pool_count": 1024, 00:10:13.580 "small_bufsize": 8192, 00:10:13.580 "large_bufsize": 135168, 00:10:13.580 "enable_numa": false 00:10:13.580 } 00:10:13.580 } 00:10:13.580 ] 00:10:13.580 }, 00:10:13.580 { 00:10:13.580 "subsystem": "sock", 00:10:13.580 "config": [ 00:10:13.580 { 
00:10:13.580 "method": "sock_set_default_impl", 00:10:13.580 "params": { 00:10:13.580 "impl_name": "posix" 00:10:13.580 } 00:10:13.580 }, 00:10:13.580 { 00:10:13.580 "method": "sock_impl_set_options", 00:10:13.580 "params": { 00:10:13.580 "impl_name": "ssl", 00:10:13.580 "recv_buf_size": 4096, 00:10:13.580 "send_buf_size": 4096, 00:10:13.580 "enable_recv_pipe": true, 00:10:13.580 "enable_quickack": false, 00:10:13.580 "enable_placement_id": 0, 00:10:13.580 "enable_zerocopy_send_server": true, 00:10:13.580 "enable_zerocopy_send_client": false, 00:10:13.581 "zerocopy_threshold": 0, 00:10:13.581 "tls_version": 0, 00:10:13.581 "enable_ktls": false 00:10:13.581 } 00:10:13.581 }, 00:10:13.581 { 00:10:13.581 "method": "sock_impl_set_options", 00:10:13.581 "params": { 00:10:13.581 "impl_name": "posix", 00:10:13.581 "recv_buf_size": 2097152, 00:10:13.581 "send_buf_size": 2097152, 00:10:13.581 "enable_recv_pipe": true, 00:10:13.581 "enable_quickack": false, 00:10:13.581 "enable_placement_id": 0, 00:10:13.581 "enable_zerocopy_send_server": true, 00:10:13.581 "enable_zerocopy_send_client": false, 00:10:13.581 "zerocopy_threshold": 0, 00:10:13.581 "tls_version": 0, 00:10:13.581 "enable_ktls": false 00:10:13.581 } 00:10:13.581 } 00:10:13.581 ] 00:10:13.581 }, 00:10:13.581 { 00:10:13.581 "subsystem": "vmd", 00:10:13.581 "config": [] 00:10:13.581 }, 00:10:13.581 { 00:10:13.581 "subsystem": "accel", 00:10:13.581 "config": [ 00:10:13.581 { 00:10:13.581 "method": "accel_set_options", 00:10:13.581 "params": { 00:10:13.581 "small_cache_size": 128, 00:10:13.581 "large_cache_size": 16, 00:10:13.581 "task_count": 2048, 00:10:13.581 "sequence_count": 2048, 00:10:13.581 "buf_count": 2048 00:10:13.581 } 00:10:13.581 } 00:10:13.581 ] 00:10:13.581 }, 00:10:13.581 { 00:10:13.581 "subsystem": "bdev", 00:10:13.581 "config": [ 00:10:13.581 { 00:10:13.581 "method": "bdev_set_options", 00:10:13.581 "params": { 00:10:13.581 "bdev_io_pool_size": 65535, 00:10:13.581 "bdev_io_cache_size": 256, 00:10:13.581 "bdev_auto_examine": true, 00:10:13.581 "iobuf_small_cache_size": 128, 00:10:13.581 "iobuf_large_cache_size": 16 00:10:13.581 } 00:10:13.581 }, 00:10:13.581 { 00:10:13.581 "method": "bdev_raid_set_options", 00:10:13.581 "params": { 00:10:13.581 "process_window_size_kb": 1024, 00:10:13.581 "process_max_bandwidth_mb_sec": 0 00:10:13.581 } 00:10:13.581 }, 00:10:13.581 { 00:10:13.581 "method": "bdev_iscsi_set_options", 00:10:13.581 "params": { 00:10:13.581 "timeout_sec": 30 00:10:13.581 } 00:10:13.581 }, 00:10:13.581 { 00:10:13.581 "method": "bdev_nvme_set_options", 00:10:13.581 "params": { 00:10:13.581 "action_on_timeout": "none", 00:10:13.581 "timeout_us": 0, 00:10:13.581 "timeout_admin_us": 0, 00:10:13.581 "keep_alive_timeout_ms": 10000, 00:10:13.581 "arbitration_burst": 0, 00:10:13.581 "low_priority_weight": 0, 00:10:13.581 "medium_priority_weight": 0, 00:10:13.581 "high_priority_weight": 0, 00:10:13.581 "nvme_adminq_poll_period_us": 10000, 00:10:13.581 "nvme_ioq_poll_period_us": 0, 00:10:13.581 "io_queue_requests": 0, 00:10:13.581 "delay_cmd_submit": true, 00:10:13.581 "transport_retry_count": 4, 00:10:13.581 "bdev_retry_count": 3, 00:10:13.581 "transport_ack_timeout": 0, 00:10:13.581 "ctrlr_loss_timeout_sec": 0, 00:10:13.581 "reconnect_delay_sec": 0, 00:10:13.581 "fast_io_fail_timeout_sec": 0, 00:10:13.581 "disable_auto_failback": false, 00:10:13.581 "generate_uuids": false, 00:10:13.581 "transport_tos": 0, 00:10:13.581 "nvme_error_stat": false, 00:10:13.581 "rdma_srq_size": 0, 00:10:13.581 "io_path_stat": false, 
00:10:13.581 "allow_accel_sequence": false, 00:10:13.581 "rdma_max_cq_size": 0, 00:10:13.581 "rdma_cm_event_timeout_ms": 0, 00:10:13.581 "dhchap_digests": [ 00:10:13.581 "sha256", 00:10:13.581 "sha384", 00:10:13.581 "sha512" 00:10:13.581 ], 00:10:13.581 "dhchap_dhgroups": [ 00:10:13.581 "null", 00:10:13.581 "ffdhe2048", 00:10:13.581 "ffdhe3072", 00:10:13.581 "ffdhe4096", 00:10:13.581 "ffdhe6144", 00:10:13.581 "ffdhe8192" 00:10:13.581 ] 00:10:13.581 } 00:10:13.581 }, 00:10:13.581 { 00:10:13.581 "method": "bdev_nvme_set_hotplug", 00:10:13.581 "params": { 00:10:13.581 "period_us": 100000, 00:10:13.581 "enable": false 00:10:13.581 } 00:10:13.581 }, 00:10:13.581 { 00:10:13.581 "method": "bdev_wait_for_examine" 00:10:13.581 } 00:10:13.581 ] 00:10:13.581 }, 00:10:13.581 { 00:10:13.581 "subsystem": "scsi", 00:10:13.581 "config": null 00:10:13.581 }, 00:10:13.581 { 00:10:13.581 "subsystem": "scheduler", 00:10:13.581 "config": [ 00:10:13.581 { 00:10:13.581 "method": "framework_set_scheduler", 00:10:13.581 "params": { 00:10:13.581 "name": "static" 00:10:13.581 } 00:10:13.581 } 00:10:13.581 ] 00:10:13.581 }, 00:10:13.581 { 00:10:13.581 "subsystem": "vhost_scsi", 00:10:13.581 "config": [] 00:10:13.581 }, 00:10:13.581 { 00:10:13.581 "subsystem": "vhost_blk", 00:10:13.581 "config": [] 00:10:13.581 }, 00:10:13.581 { 00:10:13.581 "subsystem": "ublk", 00:10:13.581 "config": [] 00:10:13.581 }, 00:10:13.581 { 00:10:13.581 "subsystem": "nbd", 00:10:13.581 "config": [] 00:10:13.581 }, 00:10:13.581 { 00:10:13.581 "subsystem": "nvmf", 00:10:13.581 "config": [ 00:10:13.581 { 00:10:13.581 "method": "nvmf_set_config", 00:10:13.581 "params": { 00:10:13.581 "discovery_filter": "match_any", 00:10:13.581 "admin_cmd_passthru": { 00:10:13.581 "identify_ctrlr": false 00:10:13.581 }, 00:10:13.581 "dhchap_digests": [ 00:10:13.581 "sha256", 00:10:13.581 "sha384", 00:10:13.581 "sha512" 00:10:13.581 ], 00:10:13.581 "dhchap_dhgroups": [ 00:10:13.581 "null", 00:10:13.581 "ffdhe2048", 00:10:13.581 "ffdhe3072", 00:10:13.581 "ffdhe4096", 00:10:13.581 "ffdhe6144", 00:10:13.581 "ffdhe8192" 00:10:13.581 ] 00:10:13.581 } 00:10:13.581 }, 00:10:13.581 { 00:10:13.581 "method": "nvmf_set_max_subsystems", 00:10:13.581 "params": { 00:10:13.581 "max_subsystems": 1024 00:10:13.581 } 00:10:13.581 }, 00:10:13.581 { 00:10:13.581 "method": "nvmf_set_crdt", 00:10:13.581 "params": { 00:10:13.581 "crdt1": 0, 00:10:13.581 "crdt2": 0, 00:10:13.581 "crdt3": 0 00:10:13.581 } 00:10:13.581 }, 00:10:13.581 { 00:10:13.581 "method": "nvmf_create_transport", 00:10:13.581 "params": { 00:10:13.581 "trtype": "TCP", 00:10:13.581 "max_queue_depth": 128, 00:10:13.581 "max_io_qpairs_per_ctrlr": 127, 00:10:13.581 "in_capsule_data_size": 4096, 00:10:13.581 "max_io_size": 131072, 00:10:13.581 "io_unit_size": 131072, 00:10:13.581 "max_aq_depth": 128, 00:10:13.581 "num_shared_buffers": 511, 00:10:13.581 "buf_cache_size": 4294967295, 00:10:13.581 "dif_insert_or_strip": false, 00:10:13.581 "zcopy": false, 00:10:13.581 "c2h_success": true, 00:10:13.581 "sock_priority": 0, 00:10:13.581 "abort_timeout_sec": 1, 00:10:13.581 "ack_timeout": 0, 00:10:13.581 "data_wr_pool_size": 0 00:10:13.581 } 00:10:13.581 } 00:10:13.581 ] 00:10:13.581 }, 00:10:13.581 { 00:10:13.581 "subsystem": "iscsi", 00:10:13.581 "config": [ 00:10:13.581 { 00:10:13.581 "method": "iscsi_set_options", 00:10:13.581 "params": { 00:10:13.581 "node_base": "iqn.2016-06.io.spdk", 00:10:13.581 "max_sessions": 128, 00:10:13.581 "max_connections_per_session": 2, 00:10:13.581 "max_queue_depth": 64, 00:10:13.581 
"default_time2wait": 2, 00:10:13.581 "default_time2retain": 20, 00:10:13.581 "first_burst_length": 8192, 00:10:13.581 "immediate_data": true, 00:10:13.581 "allow_duplicated_isid": false, 00:10:13.581 "error_recovery_level": 0, 00:10:13.581 "nop_timeout": 60, 00:10:13.581 "nop_in_interval": 30, 00:10:13.581 "disable_chap": false, 00:10:13.581 "require_chap": false, 00:10:13.581 "mutual_chap": false, 00:10:13.581 "chap_group": 0, 00:10:13.581 "max_large_datain_per_connection": 64, 00:10:13.581 "max_r2t_per_connection": 4, 00:10:13.581 "pdu_pool_size": 36864, 00:10:13.581 "immediate_data_pool_size": 16384, 00:10:13.581 "data_out_pool_size": 2048 00:10:13.581 } 00:10:13.581 } 00:10:13.581 ] 00:10:13.581 } 00:10:13.581 ] 00:10:13.581 } 00:10:13.581 13:41:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:10:13.581 13:41:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 1547962 00:10:13.581 13:41:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 1547962 ']' 00:10:13.581 13:41:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 1547962 00:10:13.581 13:41:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:10:13.581 13:41:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:13.581 13:41:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1547962 00:10:13.840 13:41:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:13.840 13:41:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:13.840 13:41:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1547962' 00:10:13.840 killing process with pid 1547962 00:10:13.840 13:41:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 1547962 00:10:13.840 13:41:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 1547962 00:10:14.100 13:41:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=1548226 00:10:14.100 13:41:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:10:14.100 13:41:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:10:19.369 13:41:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 1548226 00:10:19.369 13:41:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 1548226 ']' 00:10:19.369 13:41:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 1548226 00:10:19.369 13:41:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:10:19.369 13:41:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:19.369 13:41:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1548226 00:10:19.369 13:41:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:19.369 13:41:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:19.369 13:41:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1548226' 00:10:19.369 killing process with pid 1548226 00:10:19.369 13:41:18 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 1548226 00:10:19.369 13:41:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 1548226 00:10:19.369 13:41:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt 00:10:19.369 13:41:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt 00:10:19.369 00:10:19.369 real 0m6.215s 00:10:19.369 user 0m5.905s 00:10:19.369 sys 0m0.604s 00:10:19.369 13:41:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:19.369 13:41:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:10:19.369 ************************************ 00:10:19.369 END TEST skip_rpc_with_json 00:10:19.369 ************************************ 00:10:19.370 13:41:19 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:10:19.370 13:41:19 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:19.370 13:41:19 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:19.370 13:41:19 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:19.370 ************************************ 00:10:19.370 START TEST skip_rpc_with_delay 00:10:19.370 ************************************ 00:10:19.370 13:41:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:10:19.370 13:41:19 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:10:19.370 13:41:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:10:19.370 13:41:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:10:19.370 13:41:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:10:19.370 13:41:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:19.370 13:41:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:10:19.370 13:41:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:19.370 13:41:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:10:19.370 13:41:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:19.370 13:41:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:10:19.370 13:41:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:10:19.370 13:41:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:10:19.370 [2024-12-05 13:41:19.203220] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
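
Two checks conclude in quick succession above. skip_rpc_with_json builds state over RPC (nvmf_create_transport -t tcp), snapshots it with save_config, relaunches the target from the resulting JSON, and greps the relaunch log for 'TCP Transport Init'; skip_rpc_with_delay then confirms that --wait-for-rpc is refused when no RPC server will be started. A hedged condensation of both, assuming a stock SPDK checkout (the file names and the stderr redirection are assumptions):

    # 1) Config round-trip: snapshot the live config, relaunch from JSON,
    #    and verify the TCP transport was restored (the grep the suite runs).
    ./scripts/rpc.py nvmf_create_transport -t tcp
    ./scripts/rpc.py save_config > config.json
    # ...stop the first target, then:
    ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --json config.json > log.txt 2>&1 &
    sleep 5
    grep -q 'TCP Transport Init' log.txt && echo "transport restored from JSON"

    # 2) Flag sanity: --wait-for-rpc without an RPC server must be refused,
    #    matching the *ERROR* line just logged.
    ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc \
        || echo "rejected with non-zero exit, as expected"
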
00:10:19.370 13:41:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:10:19.370 13:41:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:19.370 13:41:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:19.370 13:41:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:19.370 00:10:19.370 real 0m0.068s 00:10:19.370 user 0m0.044s 00:10:19.370 sys 0m0.023s 00:10:19.370 13:41:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:19.370 13:41:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:10:19.370 ************************************ 00:10:19.370 END TEST skip_rpc_with_delay 00:10:19.370 ************************************ 00:10:19.629 13:41:19 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:10:19.629 13:41:19 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:10:19.629 13:41:19 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:10:19.629 13:41:19 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:19.629 13:41:19 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:19.629 13:41:19 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:19.629 ************************************ 00:10:19.629 START TEST exit_on_failed_rpc_init 00:10:19.629 ************************************ 00:10:19.629 13:41:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:10:19.629 13:41:19 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=1549272 00:10:19.629 13:41:19 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 1549272 00:10:19.629 13:41:19 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:10:19.629 13:41:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 1549272 ']' 00:10:19.629 13:41:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:19.629 13:41:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:19.629 13:41:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:19.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:19.629 13:41:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:19.629 13:41:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:10:19.629 [2024-12-05 13:41:19.339038] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 
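
waitforlisten above blocks until the freshly started target answers on the default RPC socket (/var/tmp/spdk.sock). A minimal hedged stand-in, not the suite's actual helper (the poll interval and the -t client timeout are assumptions):

    # Poll until a trivial RPC succeeds, then proceed with the test.
    until ./scripts/rpc.py -t 1 spdk_get_version >/dev/null 2>&1; do
        sleep 0.2
    done
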
00:10:19.629 [2024-12-05 13:41:19.339080] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1549272 ] 00:10:19.629 [2024-12-05 13:41:19.416590] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:19.629 [2024-12-05 13:41:19.439129] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:19.888 13:41:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:19.888 13:41:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:10:19.888 13:41:19 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:10:19.888 13:41:19 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:10:19.888 13:41:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:10:19.888 13:41:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:10:19.888 13:41:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:10:19.888 13:41:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:19.888 13:41:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:10:19.888 13:41:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:19.888 13:41:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:10:19.888 13:41:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:19.888 13:41:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:10:19.888 13:41:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:10:19.888 13:41:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:10:19.888 [2024-12-05 13:41:19.697662] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 00:10:19.888 [2024-12-05 13:41:19.697708] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1549333 ] 00:10:20.147 [2024-12-05 13:41:19.771065] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:20.147 [2024-12-05 13:41:19.792114] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:20.147 [2024-12-05 13:41:19.792165] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
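
That 'socket ... in use' error is the whole point of exit_on_failed_rpc_init: a second target contending for the same default RPC socket must fail fast instead of hanging. A hedged reproduction of the two launches logged above (the sleep is a crude stand-in for waitforlisten):

    # First instance claims /var/tmp/spdk.sock.
    ./build/bin/spdk_tgt -m 0x1 &
    first=$!
    sleep 2
    # Second instance targets the same socket and is expected to bail out.
    ./build/bin/spdk_tgt -m 0x2
    echo "second instance exit status: $?"   # non-zero; the suite folds this into es=1
    kill "$first"; wait "$first"
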
00:10:20.147 [2024-12-05 13:41:19.792174] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:10:20.147 [2024-12-05 13:41:19.792179] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:20.147 13:41:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:10:20.147 13:41:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:20.147 13:41:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:10:20.147 13:41:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:10:20.147 13:41:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:10:20.147 13:41:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:20.147 13:41:19 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:10:20.147 13:41:19 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 1549272 00:10:20.147 13:41:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 1549272 ']' 00:10:20.147 13:41:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 1549272 00:10:20.147 13:41:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:10:20.147 13:41:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:20.147 13:41:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1549272 00:10:20.147 13:41:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:20.147 13:41:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:20.147 13:41:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1549272' 00:10:20.147 killing process with pid 1549272 00:10:20.147 13:41:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 1549272 00:10:20.147 13:41:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 1549272 00:10:20.406 00:10:20.406 real 0m0.878s 00:10:20.406 user 0m0.895s 00:10:20.406 sys 0m0.387s 00:10:20.406 13:41:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:20.406 13:41:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:10:20.406 ************************************ 00:10:20.406 END TEST exit_on_failed_rpc_init 00:10:20.406 ************************************ 00:10:20.406 13:41:20 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:10:20.406 00:10:20.406 real 0m12.966s 00:10:20.406 user 0m12.158s 00:10:20.406 sys 0m1.577s 00:10:20.406 13:41:20 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:20.406 13:41:20 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:20.406 ************************************ 00:10:20.406 END TEST skip_rpc 00:10:20.406 ************************************ 00:10:20.406 13:41:20 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:10:20.406 13:41:20 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:20.406 13:41:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:20.406 13:41:20 -- 
common/autotest_common.sh@10 -- # set +x 00:10:20.666 ************************************ 00:10:20.666 START TEST rpc_client 00:10:20.666 ************************************ 00:10:20.666 13:41:20 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:10:20.666 * Looking for test storage... 00:10:20.666 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client 00:10:20.666 13:41:20 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:20.666 13:41:20 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:10:20.666 13:41:20 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:20.666 13:41:20 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:20.666 13:41:20 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:20.666 13:41:20 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:20.666 13:41:20 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:20.666 13:41:20 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:10:20.666 13:41:20 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:10:20.666 13:41:20 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:10:20.666 13:41:20 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:10:20.666 13:41:20 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:10:20.666 13:41:20 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:10:20.666 13:41:20 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:10:20.666 13:41:20 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:20.666 13:41:20 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:10:20.666 13:41:20 rpc_client -- scripts/common.sh@345 -- # : 1 00:10:20.666 13:41:20 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:20.666 13:41:20 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:20.666 13:41:20 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:10:20.666 13:41:20 rpc_client -- scripts/common.sh@353 -- # local d=1 00:10:20.666 13:41:20 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:20.666 13:41:20 rpc_client -- scripts/common.sh@355 -- # echo 1 00:10:20.666 13:41:20 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:10:20.666 13:41:20 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:10:20.666 13:41:20 rpc_client -- scripts/common.sh@353 -- # local d=2 00:10:20.666 13:41:20 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:20.666 13:41:20 rpc_client -- scripts/common.sh@355 -- # echo 2 00:10:20.666 13:41:20 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:10:20.666 13:41:20 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:20.666 13:41:20 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:20.666 13:41:20 rpc_client -- scripts/common.sh@368 -- # return 0 00:10:20.666 13:41:20 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:20.666 13:41:20 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:20.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:20.666 --rc genhtml_branch_coverage=1 00:10:20.666 --rc genhtml_function_coverage=1 00:10:20.666 --rc genhtml_legend=1 00:10:20.666 --rc geninfo_all_blocks=1 00:10:20.666 --rc geninfo_unexecuted_blocks=1 00:10:20.666 00:10:20.666 ' 00:10:20.666 13:41:20 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:20.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:20.666 --rc genhtml_branch_coverage=1 00:10:20.666 --rc genhtml_function_coverage=1 00:10:20.666 --rc genhtml_legend=1 00:10:20.666 --rc geninfo_all_blocks=1 00:10:20.666 --rc geninfo_unexecuted_blocks=1 00:10:20.666 00:10:20.666 ' 00:10:20.666 13:41:20 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:20.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:20.666 --rc genhtml_branch_coverage=1 00:10:20.666 --rc genhtml_function_coverage=1 00:10:20.666 --rc genhtml_legend=1 00:10:20.666 --rc geninfo_all_blocks=1 00:10:20.666 --rc geninfo_unexecuted_blocks=1 00:10:20.666 00:10:20.666 ' 00:10:20.666 13:41:20 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:20.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:20.666 --rc genhtml_branch_coverage=1 00:10:20.666 --rc genhtml_function_coverage=1 00:10:20.666 --rc genhtml_legend=1 00:10:20.666 --rc geninfo_all_blocks=1 00:10:20.666 --rc geninfo_unexecuted_blocks=1 00:10:20.666 00:10:20.666 ' 00:10:20.666 13:41:20 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:10:20.666 OK 00:10:20.666 13:41:20 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:10:20.666 00:10:20.666 real 0m0.200s 00:10:20.666 user 0m0.119s 00:10:20.666 sys 0m0.093s 00:10:20.666 13:41:20 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:20.666 13:41:20 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:10:20.666 ************************************ 00:10:20.666 END TEST rpc_client 00:10:20.666 ************************************ 00:10:20.666 13:41:20 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config.sh 00:10:20.666 
13:41:20 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:20.666 13:41:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:20.666 13:41:20 -- common/autotest_common.sh@10 -- # set +x 00:10:20.925 ************************************ 00:10:20.925 START TEST json_config 00:10:20.925 ************************************ 00:10:20.925 13:41:20 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config.sh 00:10:20.925 13:41:20 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:20.925 13:41:20 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:10:20.925 13:41:20 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:20.925 13:41:20 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:20.925 13:41:20 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:20.925 13:41:20 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:20.925 13:41:20 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:20.925 13:41:20 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:10:20.925 13:41:20 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:10:20.925 13:41:20 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:10:20.925 13:41:20 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:10:20.925 13:41:20 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:10:20.925 13:41:20 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:10:20.925 13:41:20 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:10:20.925 13:41:20 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:20.925 13:41:20 json_config -- scripts/common.sh@344 -- # case "$op" in 00:10:20.925 13:41:20 json_config -- scripts/common.sh@345 -- # : 1 00:10:20.925 13:41:20 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:20.925 13:41:20 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:20.925 13:41:20 json_config -- scripts/common.sh@365 -- # decimal 1 00:10:20.925 13:41:20 json_config -- scripts/common.sh@353 -- # local d=1 00:10:20.925 13:41:20 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:20.925 13:41:20 json_config -- scripts/common.sh@355 -- # echo 1 00:10:20.925 13:41:20 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:10:20.925 13:41:20 json_config -- scripts/common.sh@366 -- # decimal 2 00:10:20.925 13:41:20 json_config -- scripts/common.sh@353 -- # local d=2 00:10:20.925 13:41:20 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:20.925 13:41:20 json_config -- scripts/common.sh@355 -- # echo 2 00:10:20.925 13:41:20 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:10:20.925 13:41:20 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:20.925 13:41:20 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:20.925 13:41:20 json_config -- scripts/common.sh@368 -- # return 0 00:10:20.925 13:41:20 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:20.925 13:41:20 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:20.925 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:20.925 --rc genhtml_branch_coverage=1 00:10:20.925 --rc genhtml_function_coverage=1 00:10:20.925 --rc genhtml_legend=1 00:10:20.925 --rc geninfo_all_blocks=1 00:10:20.925 --rc geninfo_unexecuted_blocks=1 00:10:20.925 00:10:20.925 ' 00:10:20.925 13:41:20 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:20.925 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:20.925 --rc genhtml_branch_coverage=1 00:10:20.925 --rc genhtml_function_coverage=1 00:10:20.926 --rc genhtml_legend=1 00:10:20.926 --rc geninfo_all_blocks=1 00:10:20.926 --rc geninfo_unexecuted_blocks=1 00:10:20.926 00:10:20.926 ' 00:10:20.926 13:41:20 json_config -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:20.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:20.926 --rc genhtml_branch_coverage=1 00:10:20.926 --rc genhtml_function_coverage=1 00:10:20.926 --rc genhtml_legend=1 00:10:20.926 --rc geninfo_all_blocks=1 00:10:20.926 --rc geninfo_unexecuted_blocks=1 00:10:20.926 00:10:20.926 ' 00:10:20.926 13:41:20 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:20.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:20.926 --rc genhtml_branch_coverage=1 00:10:20.926 --rc genhtml_function_coverage=1 00:10:20.926 --rc genhtml_legend=1 00:10:20.926 --rc geninfo_all_blocks=1 00:10:20.926 --rc geninfo_unexecuted_blocks=1 00:10:20.926 00:10:20.926 ' 00:10:20.926 13:41:20 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:10:20.926 13:41:20 json_config -- nvmf/common.sh@7 -- # uname -s 00:10:20.926 13:41:20 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:20.926 13:41:20 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:20.926 13:41:20 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:20.926 13:41:20 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:20.926 13:41:20 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:20.926 13:41:20 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:20.926 13:41:20 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
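Annotation: before both the rpc_client and json_config suites run, scripts/common.sh decides lcov flag support via the `lt 1.15 2` comparison traced twice above: split each dotted version on its separators and compare component-wise, padding the shorter one with zeros. A condensed sketch of that logic (simplified from the traced cmp_versions; assumes plain decimal components):

version_lt() {
    # Split dotted versions into arrays ("1.15" -> (1 15), "2" -> (2)).
    local -a v1 v2
    IFS=.- read -ra v1 <<< "$1"
    IFS=.- read -ra v2 <<< "$2"
    local i max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < max; i++ )); do
        # A missing component compares as 0, so "1.15" vs "2" is 1.15 vs 2.0.
        local a=${v1[i]:-0} b=${v2[i]:-0}
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1  # equal is not less-than
}

version_lt 1.15 2 && echo "lcov is older than 2: use the plain --rc flags"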
00:10:20.926 13:41:20 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:20.926 13:41:20 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:20.926 13:41:20 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:20.926 13:41:20 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:10:20.926 13:41:20 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:10:20.926 13:41:20 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:20.926 13:41:20 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:20.926 13:41:20 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:10:20.926 13:41:20 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:20.926 13:41:20 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:10:20.926 13:41:20 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:10:20.926 13:41:20 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:20.926 13:41:20 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:20.926 13:41:20 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:20.926 13:41:20 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.926 13:41:20 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.926 13:41:20 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.926 13:41:20 json_config -- paths/export.sh@5 -- # export PATH 00:10:20.926 13:41:20 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.926 13:41:20 json_config -- nvmf/common.sh@51 -- # : 0 00:10:20.926 13:41:20 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:20.926 13:41:20 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:20.926 
13:41:20 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:20.926 13:41:20 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:20.926 13:41:20 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:20.926 13:41:20 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:20.926 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:20.926 13:41:20 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:20.926 13:41:20 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:20.926 13:41:20 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:20.926 13:41:20 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/common.sh 00:10:20.926 13:41:20 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:10:20.926 13:41:20 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:10:20.926 13:41:20 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:10:20.926 13:41:20 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:10:20.926 13:41:20 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:10:20.926 13:41:20 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:10:20.926 13:41:20 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:10:20.926 13:41:20 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:10:20.926 13:41:20 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:10:20.926 13:41:20 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:10:20.926 13:41:20 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_initiator_config.json') 00:10:20.926 13:41:20 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:10:20.926 13:41:20 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:10:20.926 13:41:20 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:10:20.926 13:41:20 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:10:20.926 INFO: JSON configuration test init 00:10:20.926 13:41:20 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:10:20.926 13:41:20 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:10:20.926 13:41:20 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:20.926 13:41:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:20.926 13:41:20 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:10:20.926 13:41:20 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:20.926 13:41:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:20.926 13:41:20 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:10:20.926 13:41:20 json_config -- json_config/common.sh@9 -- # 
local app=target 00:10:20.926 13:41:20 json_config -- json_config/common.sh@10 -- # shift 00:10:20.926 13:41:20 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:10:20.926 13:41:20 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:10:20.926 13:41:20 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:10:20.926 13:41:20 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:10:20.926 13:41:20 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:10:20.926 13:41:20 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1549666 00:10:20.926 13:41:20 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:10:20.926 Waiting for target to run... 00:10:20.926 13:41:20 json_config -- json_config/common.sh@25 -- # waitforlisten 1549666 /var/tmp/spdk_tgt.sock 00:10:20.926 13:41:20 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:10:20.926 13:41:20 json_config -- common/autotest_common.sh@835 -- # '[' -z 1549666 ']' 00:10:20.926 13:41:20 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:10:20.926 13:41:20 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:20.926 13:41:20 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:10:20.926 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:10:20.926 13:41:20 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:20.926 13:41:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:21.185 [2024-12-05 13:41:20.785574] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 
00:10:21.185 [2024-12-05 13:41:20.785620] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1549666 ] 00:10:21.443 [2024-12-05 13:41:21.214160] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:21.443 [2024-12-05 13:41:21.236114] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:22.009 13:41:21 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:22.009 13:41:21 json_config -- common/autotest_common.sh@868 -- # return 0 00:10:22.009 13:41:21 json_config -- json_config/common.sh@26 -- # echo '' 00:10:22.009 00:10:22.009 13:41:21 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:10:22.009 13:41:21 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:10:22.009 13:41:21 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:22.009 13:41:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:22.009 13:41:21 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:10:22.009 13:41:21 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:10:22.009 13:41:21 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:22.009 13:41:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:22.009 13:41:21 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:10:22.009 13:41:21 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:10:22.009 13:41:21 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:10:25.324 13:41:24 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:10:25.324 13:41:24 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:10:25.324 13:41:24 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:25.324 13:41:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:25.324 13:41:24 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:10:25.324 13:41:24 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:10:25.324 13:41:24 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:10:25.324 13:41:24 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:10:25.324 13:41:24 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:10:25.324 13:41:24 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:10:25.324 13:41:24 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:10:25.324 13:41:24 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:10:25.324 13:41:24 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:10:25.324 13:41:24 json_config -- json_config/json_config.sh@51 -- # local get_types 00:10:25.324 13:41:24 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:10:25.324 13:41:24 json_config -- json_config/json_config.sh@54 -- 
# echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:10:25.324 13:41:24 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:10:25.324 13:41:24 json_config -- json_config/json_config.sh@54 -- # sort 00:10:25.324 13:41:24 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:10:25.324 13:41:24 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:10:25.324 13:41:24 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:10:25.324 13:41:24 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:10:25.324 13:41:24 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:25.324 13:41:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:25.324 13:41:24 json_config -- json_config/json_config.sh@62 -- # return 0 00:10:25.324 13:41:24 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:10:25.324 13:41:24 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:10:25.324 13:41:24 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:10:25.324 13:41:24 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:10:25.324 13:41:24 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:10:25.324 13:41:24 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:10:25.324 13:41:24 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:25.324 13:41:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:25.324 13:41:24 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:10:25.324 13:41:24 json_config -- json_config/json_config.sh@240 -- # [[ rdma == \r\d\m\a ]] 00:10:25.324 13:41:24 json_config -- json_config/json_config.sh@241 -- # TEST_TRANSPORT=rdma 00:10:25.324 13:41:24 json_config -- json_config/json_config.sh@241 -- # nvmftestinit 00:10:25.324 13:41:24 json_config -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:10:25.324 13:41:24 json_config -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:25.324 13:41:24 json_config -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:25.324 13:41:24 json_config -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:25.324 13:41:24 json_config -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:25.324 13:41:24 json_config -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:25.324 13:41:24 json_config -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:10:25.324 13:41:24 json_config -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:25.324 13:41:24 json_config -- nvmf/common.sh@442 -- # [[ phy-fallback != virt ]] 00:10:25.324 13:41:24 json_config -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:25.324 13:41:24 json_config -- nvmf/common.sh@309 -- # xtrace_disable 00:10:25.324 13:41:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:31.892 13:41:30 json_config -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:31.893 13:41:30 json_config -- nvmf/common.sh@315 -- # pci_devs=() 00:10:31.893 13:41:30 json_config -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:31.893 13:41:30 json_config -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:31.893 13:41:30 json_config -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:31.893 13:41:30 json_config -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:31.893 
13:41:30 json_config -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:31.893 13:41:30 json_config -- nvmf/common.sh@319 -- # net_devs=() 00:10:31.893 13:41:30 json_config -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:31.893 13:41:30 json_config -- nvmf/common.sh@320 -- # e810=() 00:10:31.893 13:41:30 json_config -- nvmf/common.sh@320 -- # local -ga e810 00:10:31.893 13:41:30 json_config -- nvmf/common.sh@321 -- # x722=() 00:10:31.893 13:41:30 json_config -- nvmf/common.sh@321 -- # local -ga x722 00:10:31.893 13:41:30 json_config -- nvmf/common.sh@322 -- # mlx=() 00:10:31.893 13:41:30 json_config -- nvmf/common.sh@322 -- # local -ga mlx 00:10:31.893 13:41:30 json_config -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:31.893 13:41:30 json_config -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:31.893 13:41:30 json_config -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:31.893 13:41:30 json_config -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:31.893 13:41:30 json_config -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:31.893 13:41:30 json_config -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:31.893 13:41:30 json_config -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:31.893 13:41:30 json_config -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:31.893 13:41:30 json_config -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:31.893 13:41:30 json_config -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:31.893 13:41:30 json_config -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:31.893 13:41:30 json_config -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:31.893 13:41:30 json_config -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:31.893 13:41:30 json_config -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:10:31.893 13:41:30 json_config -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:10:31.893 13:41:30 json_config -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:10:31.893 13:41:30 json_config -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:10:31.893 13:41:30 json_config -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:10:31.893 13:41:30 json_config -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:31.893 13:41:30 json_config -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:31.893 13:41:30 json_config -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:10:31.893 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:10:31.893 13:41:30 json_config -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:10:31.893 13:41:30 json_config -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:10:31.893 13:41:30 json_config -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:31.893 13:41:30 json_config -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:31.893 13:41:30 json_config -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:10:31.893 13:41:30 json_config -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:10:31.893 13:41:30 json_config -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:31.893 13:41:30 json_config -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:10:31.893 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:10:31.893 13:41:30 json_config -- 
nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:10:31.893 13:41:30 json_config -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:10:31.893 13:41:30 json_config -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:31.893 13:41:30 json_config -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:31.893 13:41:30 json_config -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:10:31.893 13:41:30 json_config -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:10:31.893 13:41:30 json_config -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:31.893 13:41:30 json_config -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:10:31.893 13:41:30 json_config -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:31.893 13:41:30 json_config -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:31.893 13:41:30 json_config -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:10:31.893 13:41:30 json_config -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:31.893 13:41:30 json_config -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:31.893 13:41:30 json_config -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:10:31.893 Found net devices under 0000:18:00.0: mlx_0_0 00:10:31.893 13:41:30 json_config -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:31.893 13:41:30 json_config -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:31.893 13:41:30 json_config -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:31.893 13:41:30 json_config -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:10:31.893 13:41:30 json_config -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:31.893 13:41:30 json_config -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:31.893 13:41:30 json_config -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:10:31.893 Found net devices under 0000:18:00.1: mlx_0_1 00:10:31.893 13:41:30 json_config -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:31.893 13:41:30 json_config -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:31.893 13:41:30 json_config -- nvmf/common.sh@442 -- # is_hw=yes 00:10:31.893 13:41:30 json_config -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:31.893 13:41:30 json_config -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:10:31.893 13:41:30 json_config -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:10:31.893 13:41:30 json_config -- nvmf/common.sh@448 -- # rdma_device_init 00:10:31.893 13:41:30 json_config -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:10:31.893 13:41:30 json_config -- nvmf/common.sh@62 -- # uname 00:10:31.893 13:41:30 json_config -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:10:31.893 13:41:30 json_config -- nvmf/common.sh@66 -- # modprobe ib_cm 00:10:31.893 13:41:30 json_config -- nvmf/common.sh@67 -- # modprobe ib_core 00:10:31.893 13:41:30 json_config -- nvmf/common.sh@68 -- # modprobe ib_umad 00:10:31.893 13:41:30 json_config -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:10:31.893 13:41:30 json_config -- nvmf/common.sh@70 -- # modprobe iw_cm 00:10:31.893 13:41:30 json_config -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:10:31.893 13:41:30 json_config -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:10:31.893 13:41:30 json_config -- nvmf/common.sh@530 -- # allocate_nic_ips 00:10:31.893 13:41:30 json_config -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:10:31.893 13:41:30 json_config -- nvmf/common.sh@77 -- # 
get_rdma_if_list 00:10:31.893 13:41:30 json_config -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:31.893 13:41:30 json_config -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:31.893 13:41:30 json_config -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:31.893 13:41:30 json_config -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:31.893 13:41:30 json_config -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:31.893 13:41:30 json_config -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:31.893 13:41:30 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:31.893 13:41:30 json_config -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:31.893 13:41:30 json_config -- nvmf/common.sh@108 -- # echo mlx_0_0 00:10:31.893 13:41:30 json_config -- nvmf/common.sh@109 -- # continue 2 00:10:31.893 13:41:30 json_config -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:31.893 13:41:30 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:31.893 13:41:30 json_config -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:31.893 13:41:30 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:31.893 13:41:30 json_config -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:31.893 13:41:30 json_config -- nvmf/common.sh@108 -- # echo mlx_0_1 00:10:31.893 13:41:30 json_config -- nvmf/common.sh@109 -- # continue 2 00:10:31.893 13:41:30 json_config -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:31.893 13:41:30 json_config -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:10:31.893 13:41:30 json_config -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:10:31.893 13:41:30 json_config -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:10:31.893 13:41:30 json_config -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:31.893 13:41:30 json_config -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:31.893 13:41:30 json_config -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:10:31.893 13:41:30 json_config -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:10:31.893 13:41:30 json_config -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:10:31.893 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:31.893 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:10:31.893 altname enp24s0f0np0 00:10:31.893 altname ens785f0np0 00:10:31.893 inet 192.168.100.8/24 scope global mlx_0_0 00:10:31.893 valid_lft forever preferred_lft forever 00:10:31.893 13:41:30 json_config -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:31.893 13:41:30 json_config -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:10:31.893 13:41:30 json_config -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:10:31.893 13:41:30 json_config -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:10:31.893 13:41:30 json_config -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:31.893 13:41:30 json_config -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:31.893 13:41:30 json_config -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:10:31.893 13:41:30 json_config -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:10:31.893 13:41:30 json_config -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:10:31.893 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:31.893 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:10:31.893 altname enp24s0f1np1 00:10:31.893 altname ens785f1np1 
00:10:31.893 inet 192.168.100.9/24 scope global mlx_0_1 00:10:31.893 valid_lft forever preferred_lft forever 00:10:31.893 13:41:30 json_config -- nvmf/common.sh@450 -- # return 0 00:10:31.893 13:41:30 json_config -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:31.893 13:41:30 json_config -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:10:31.893 13:41:30 json_config -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:10:31.893 13:41:30 json_config -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:10:31.893 13:41:30 json_config -- nvmf/common.sh@90 -- # get_rdma_if_list 00:10:31.893 13:41:30 json_config -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:31.894 13:41:30 json_config -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:31.894 13:41:30 json_config -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:31.894 13:41:30 json_config -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:31.894 13:41:30 json_config -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:31.894 13:41:30 json_config -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:31.894 13:41:30 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:31.894 13:41:30 json_config -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:31.894 13:41:30 json_config -- nvmf/common.sh@108 -- # echo mlx_0_0 00:10:31.894 13:41:30 json_config -- nvmf/common.sh@109 -- # continue 2 00:10:31.894 13:41:30 json_config -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:31.894 13:41:30 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:31.894 13:41:30 json_config -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:31.894 13:41:30 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:31.894 13:41:30 json_config -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:31.894 13:41:30 json_config -- nvmf/common.sh@108 -- # echo mlx_0_1 00:10:31.894 13:41:30 json_config -- nvmf/common.sh@109 -- # continue 2 00:10:31.894 13:41:30 json_config -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:31.894 13:41:30 json_config -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:10:31.894 13:41:30 json_config -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:10:31.894 13:41:30 json_config -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:10:31.894 13:41:30 json_config -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:31.894 13:41:30 json_config -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:31.894 13:41:30 json_config -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:31.894 13:41:30 json_config -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:10:31.894 13:41:30 json_config -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:10:31.894 13:41:30 json_config -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:10:31.894 13:41:30 json_config -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:31.894 13:41:30 json_config -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:31.894 13:41:30 json_config -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:10:31.894 192.168.100.9' 00:10:31.894 13:41:30 json_config -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:10:31.894 192.168.100.9' 00:10:31.894 13:41:30 json_config -- nvmf/common.sh@485 -- # head -n 1 00:10:31.894 13:41:30 json_config -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:10:31.894 13:41:30 json_config -- 
nvmf/common.sh@486 -- # echo '192.168.100.8 00:10:31.894 192.168.100.9' 00:10:31.894 13:41:30 json_config -- nvmf/common.sh@486 -- # tail -n +2 00:10:31.894 13:41:30 json_config -- nvmf/common.sh@486 -- # head -n 1 00:10:31.894 13:41:30 json_config -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:10:31.894 13:41:30 json_config -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:10:31.894 13:41:30 json_config -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:10:31.894 13:41:30 json_config -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:10:31.894 13:41:30 json_config -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:10:31.894 13:41:30 json_config -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:10:31.894 13:41:30 json_config -- json_config/json_config.sh@244 -- # [[ -z 192.168.100.8 ]] 00:10:31.894 13:41:30 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:10:31.894 13:41:30 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:10:31.894 MallocForNvmf0 00:10:31.894 13:41:30 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:10:31.894 13:41:30 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:10:31.894 MallocForNvmf1 00:10:31.894 13:41:31 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t rdma -u 8192 -c 0 00:10:31.894 13:41:31 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t rdma -u 8192 -c 0 00:10:31.894 [2024-12-05 13:41:31.307223] rdma.c:2773:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:10:31.894 [2024-12-05 13:41:31.333777] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x90f5f0/0x7f0080) succeed. 00:10:31.894 [2024-12-05 13:41:31.344290] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x912840/0x8700c0) succeed. 
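Annotation: the NVMF_FIRST_TARGET_IP / NVMF_SECOND_TARGET_IP values above come from nvmf/common.sh listing each mlx interface's first IPv4 address. The extraction, condensed (same ip/awk/cut pipeline as the trace; the interface names and addresses are the ones this rig reported):

get_ip_address() {
    local interface=$1
    # "ip -o" emits one record per line; field 4 is "addr/prefixlen".
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

first_ip=$(get_ip_address mlx_0_0)   # 192.168.100.8 on this rig
second_ip=$(get_ip_address mlx_0_1)  # 192.168.100.9 on this rig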
00:10:31.894 13:41:31 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:31.894 13:41:31 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:31.894 13:41:31 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:10:31.894 13:41:31 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:10:32.153 13:41:31 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:10:32.153 13:41:31 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:10:32.153 13:41:31 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:10:32.153 13:41:31 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:10:32.411 [2024-12-05 13:41:32.095587] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:32.411 13:41:32 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:10:32.411 13:41:32 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:32.411 13:41:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:32.411 13:41:32 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:10:32.411 13:41:32 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:32.411 13:41:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:32.411 13:41:32 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:10:32.411 13:41:32 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:10:32.411 13:41:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:10:32.670 MallocBdevForConfigChangeCheck 00:10:32.670 13:41:32 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:10:32.670 13:41:32 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:32.670 13:41:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:32.670 13:41:32 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:10:32.670 13:41:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:10:32.929 13:41:32 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:10:32.929 INFO: shutting down applications... 
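Annotation: the NVMe-oF setup just traced is plain rpc.py driving against the target's RPC socket: create the backing malloc bdevs and transport, then a subsystem, attach both namespaces, and expose a listener. Replayed as a standalone sketch (commands copied from the trace above):

rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
sock=/var/tmp/spdk_tgt.sock

$rpc -s $sock bdev_malloc_create 8 512 --name MallocForNvmf0
$rpc -s $sock bdev_malloc_create 4 1024 --name MallocForNvmf1
$rpc -s $sock nvmf_create_transport -t rdma -u 8192 -c 0
$rpc -s $sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc -s $sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
$rpc -s $sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
$rpc -s $sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420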
00:10:32.929 13:41:32 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:10:32.929 13:41:32 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:10:32.929 13:41:32 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:10:32.929 13:41:32 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:10:37.144 Calling clear_iscsi_subsystem 00:10:37.144 Calling clear_nvmf_subsystem 00:10:37.144 Calling clear_nbd_subsystem 00:10:37.144 Calling clear_ublk_subsystem 00:10:37.144 Calling clear_vhost_blk_subsystem 00:10:37.144 Calling clear_vhost_scsi_subsystem 00:10:37.144 Calling clear_bdev_subsystem 00:10:37.144 13:41:36 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py 00:10:37.144 13:41:36 json_config -- json_config/json_config.sh@350 -- # count=100 00:10:37.144 13:41:36 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:10:37.144 13:41:36 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:10:37.144 13:41:36 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:10:37.144 13:41:36 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:10:37.403 13:41:37 json_config -- json_config/json_config.sh@352 -- # break 00:10:37.403 13:41:37 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:10:37.403 13:41:37 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:10:37.403 13:41:37 json_config -- json_config/common.sh@31 -- # local app=target 00:10:37.403 13:41:37 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:10:37.403 13:41:37 json_config -- json_config/common.sh@35 -- # [[ -n 1549666 ]] 00:10:37.403 13:41:37 json_config -- json_config/common.sh@38 -- # kill -SIGINT 1549666 00:10:37.403 13:41:37 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:10:37.403 13:41:37 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:37.403 13:41:37 json_config -- json_config/common.sh@41 -- # kill -0 1549666 00:10:37.403 13:41:37 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:10:37.971 13:41:37 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:10:37.971 13:41:37 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:37.971 13:41:37 json_config -- json_config/common.sh@41 -- # kill -0 1549666 00:10:37.971 13:41:37 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:10:37.971 13:41:37 json_config -- json_config/common.sh@43 -- # break 00:10:37.971 13:41:37 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:10:37.971 13:41:37 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:10:37.971 SPDK target shutdown done 00:10:37.971 13:41:37 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:10:37.971 INFO: relaunching applications... 
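Annotation: the shutdown just traced sends SIGINT once and then polls the pid instead of escalating to SIGKILL, giving the target time to tear down cleanly. The loop reduced to its core (30 iterations x 0.5 s gives the ~15 s budget seen in json_config/common.sh; simplified from the traced helper):

json_config_test_shutdown_app() {
    local pid=$1
    kill -SIGINT "$pid"
    # Poll up to 30 times; "kill -0" only tests existence, delivers nothing.
    local i
    for (( i = 0; i < 30; i++ )); do
        if ! kill -0 "$pid" 2>/dev/null; then
            echo 'SPDK target shutdown done'
            return 0
        fi
        sleep 0.5
    done
    return 1  # still alive after the budget; caller escalates
}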
00:10:37.971 13:41:37 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:10:37.971 13:41:37 json_config -- json_config/common.sh@9 -- # local app=target 00:10:37.971 13:41:37 json_config -- json_config/common.sh@10 -- # shift 00:10:37.971 13:41:37 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:10:37.971 13:41:37 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:10:37.971 13:41:37 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:10:37.971 13:41:37 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:10:37.971 13:41:37 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:10:37.971 13:41:37 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1554893 00:10:37.971 13:41:37 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:10:37.971 Waiting for target to run... 00:10:37.971 13:41:37 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:10:37.971 13:41:37 json_config -- json_config/common.sh@25 -- # waitforlisten 1554893 /var/tmp/spdk_tgt.sock 00:10:37.971 13:41:37 json_config -- common/autotest_common.sh@835 -- # '[' -z 1554893 ']' 00:10:37.971 13:41:37 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:10:37.971 13:41:37 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:37.971 13:41:37 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:10:37.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:10:37.971 13:41:37 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:37.971 13:41:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:37.971 [2024-12-05 13:41:37.570695] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 00:10:37.971 [2024-12-05 13:41:37.570751] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1554893 ] 00:10:38.230 [2024-12-05 13:41:38.013243] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:38.230 [2024-12-05 13:41:38.031357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:41.517 [2024-12-05 13:41:41.067623] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1b0f890/0x1b19c60) succeed. 00:10:41.517 [2024-12-05 13:41:41.078174] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1b12ae0/0x1b99cc0) succeed. 
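Annotation: both launches in this section block on waitforlisten before issuing RPCs: keep probing the UNIX socket until the freshly forked target answers, bailing out if the process dies first. A minimal reconstruction (the trace does not show the helper's internals, so this is an assumption-laden sketch; the real autotest_common.sh version also honors max_retries=100 as traced):

waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk_tgt.sock} max_retries=100
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    local i
    for (( i = 0; i < max_retries; i++ )); do
        # A successful rpc_get_methods round-trip proves the RPC server is up.
        if /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py \
               -s "$rpc_addr" rpc_get_methods &>/dev/null; then
            return 0
        fi
        kill -0 "$pid" 2>/dev/null || return 1  # target died during startup
        sleep 0.1
    done
    return 1
}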
00:10:41.517 [2024-12-05 13:41:41.125961] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:42.084 13:41:41 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:42.084 13:41:41 json_config -- common/autotest_common.sh@868 -- # return 0 00:10:42.084 13:41:41 json_config -- json_config/common.sh@26 -- # echo '' 00:10:42.084 00:10:42.084 13:41:41 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:10:42.084 13:41:41 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:10:42.084 INFO: Checking if target configuration is the same... 00:10:42.084 13:41:41 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:10:42.084 13:41:41 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:10:42.084 13:41:41 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:10:42.084 + '[' 2 -ne 2 ']' 00:10:42.084 +++ dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh 00:10:42.084 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/../.. 00:10:42.084 + rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:10:42.085 +++ basename /dev/fd/62 00:10:42.085 ++ mktemp /tmp/62.XXX 00:10:42.085 + tmp_file_1=/tmp/62.PFx 00:10:42.085 +++ basename /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:10:42.085 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:10:42.085 + tmp_file_2=/tmp/spdk_tgt_config.json.8Tj 00:10:42.085 + ret=0 00:10:42.085 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:10:42.344 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:10:42.344 + diff -u /tmp/62.PFx /tmp/spdk_tgt_config.json.8Tj 00:10:42.344 + echo 'INFO: JSON config files are the same' 00:10:42.344 INFO: JSON config files are the same 00:10:42.344 + rm /tmp/62.PFx /tmp/spdk_tgt_config.json.8Tj 00:10:42.344 + exit 0 00:10:42.344 13:41:42 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:10:42.344 13:41:42 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:10:42.344 INFO: changing configuration and checking if this can be detected... 
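The "same configuration" check above is a normalize-then-diff: the live config (from the save_config RPC) and the saved spdk_tgt_config.json are both piped through config_filter.py -method sort, and identical sorted output means the relaunch reproduced the saved state. Condensed from the + trace lines (temp file names are illustrative):

  # Normalize both configs so key ordering cannot cause spurious diffs.
  tmp_live=$(mktemp /tmp/62.XXX)
  tmp_saved=$(mktemp /tmp/spdk_tgt_config.json.XXX)
  ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
    | ./test/json_config/config_filter.py -method sort > "$tmp_live"
  ./test/json_config/config_filter.py -method sort \
    < spdk_tgt_config.json > "$tmp_saved"
  diff -u "$tmp_live" "$tmp_saved" && echo 'INFO: JSON config files are the same'
  rm -f "$tmp_live" "$tmp_saved"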
00:10:42.344 13:41:42 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:10:42.344 13:41:42 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:10:42.604 13:41:42 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:10:42.604 13:41:42 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:10:42.604 13:41:42 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:10:42.604 + '[' 2 -ne 2 ']' 00:10:42.604 +++ dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh 00:10:42.604 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/../.. 00:10:42.604 + rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:10:42.604 +++ basename /dev/fd/62 00:10:42.604 ++ mktemp /tmp/62.XXX 00:10:42.604 + tmp_file_1=/tmp/62.Ygo 00:10:42.604 +++ basename /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:10:42.604 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:10:42.604 + tmp_file_2=/tmp/spdk_tgt_config.json.fiZ 00:10:42.604 + ret=0 00:10:42.604 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:10:42.864 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:10:42.864 + diff -u /tmp/62.Ygo /tmp/spdk_tgt_config.json.fiZ 00:10:42.864 + ret=1 00:10:42.864 + echo '=== Start of file: /tmp/62.Ygo ===' 00:10:42.864 + cat /tmp/62.Ygo 00:10:42.864 + echo '=== End of file: /tmp/62.Ygo ===' 00:10:42.864 + echo '' 00:10:42.864 + echo '=== Start of file: /tmp/spdk_tgt_config.json.fiZ ===' 00:10:42.864 + cat /tmp/spdk_tgt_config.json.fiZ 00:10:42.864 + echo '=== End of file: /tmp/spdk_tgt_config.json.fiZ ===' 00:10:42.864 + echo '' 00:10:42.864 + rm /tmp/62.Ygo /tmp/spdk_tgt_config.json.fiZ 00:10:42.864 + exit 1 00:10:42.864 13:41:42 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:10:42.864 INFO: configuration change detected. 
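The change-detection pass inverts that check: delete the MallocBdevForConfigChangeCheck bdev over RPC, re-run the same sort-and-diff, and this time require diff to fail (ret=1). A sketch under the same assumptions as the comparison above:

  # After mutating the running config, the sorted configs must diverge.
  ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock \
    bdev_malloc_delete MallocBdevForConfigChangeCheck
  # ...regenerate tmp_live/tmp_saved exactly as in the previous snippet...
  if diff -u "$tmp_live" "$tmp_saved" >/dev/null; then
    echo 'ERROR: configuration change was not detected' >&2
    exit 1
  fi
  echo 'INFO: configuration change detected.'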
00:10:42.864 13:41:42 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:10:42.864 13:41:42 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:10:42.864 13:41:42 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:42.864 13:41:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:42.864 13:41:42 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:10:42.864 13:41:42 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:10:42.864 13:41:42 json_config -- json_config/json_config.sh@324 -- # [[ -n 1554893 ]] 00:10:42.864 13:41:42 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:10:42.864 13:41:42 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:10:42.864 13:41:42 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:42.864 13:41:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:42.864 13:41:42 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:10:42.864 13:41:42 json_config -- json_config/json_config.sh@200 -- # uname -s 00:10:42.864 13:41:42 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:10:42.864 13:41:42 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:10:42.864 13:41:42 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:10:42.864 13:41:42 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:10:42.864 13:41:42 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:42.864 13:41:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:43.124 13:41:42 json_config -- json_config/json_config.sh@330 -- # killprocess 1554893 00:10:43.124 13:41:42 json_config -- common/autotest_common.sh@954 -- # '[' -z 1554893 ']' 00:10:43.124 13:41:42 json_config -- common/autotest_common.sh@958 -- # kill -0 1554893 00:10:43.124 13:41:42 json_config -- common/autotest_common.sh@959 -- # uname 00:10:43.124 13:41:42 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:43.124 13:41:42 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1554893 00:10:43.124 13:41:42 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:43.124 13:41:42 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:43.124 13:41:42 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1554893' 00:10:43.124 killing process with pid 1554893 00:10:43.124 13:41:42 json_config -- common/autotest_common.sh@973 -- # kill 1554893 00:10:43.124 13:41:42 json_config -- common/autotest_common.sh@978 -- # wait 1554893 00:10:47.315 13:41:46 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:10:47.315 13:41:46 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:10:47.315 13:41:46 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:47.315 13:41:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:47.315 13:41:46 json_config -- json_config/json_config.sh@335 -- # return 0 00:10:47.315 13:41:46 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:10:47.315 INFO: Success 00:10:47.315 13:41:46 json_config -- 
json_config/json_config.sh@1 -- # nvmftestfini 00:10:47.315 13:41:46 json_config -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:47.315 13:41:46 json_config -- nvmf/common.sh@121 -- # sync 00:10:47.315 13:41:46 json_config -- nvmf/common.sh@123 -- # '[' '' == tcp ']' 00:10:47.315 13:41:46 json_config -- nvmf/common.sh@123 -- # '[' '' == rdma ']' 00:10:47.315 13:41:46 json_config -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:10:47.315 13:41:46 json_config -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:47.315 13:41:46 json_config -- nvmf/common.sh@523 -- # [[ '' == \t\c\p ]] 00:10:47.315 00:10:47.315 real 0m26.160s 00:10:47.315 user 0m28.604s 00:10:47.315 sys 0m6.448s 00:10:47.315 13:41:46 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:47.315 13:41:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:47.315 ************************************ 00:10:47.315 END TEST json_config 00:10:47.315 ************************************ 00:10:47.315 13:41:46 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:10:47.315 13:41:46 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:47.315 13:41:46 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:47.315 13:41:46 -- common/autotest_common.sh@10 -- # set +x 00:10:47.316 ************************************ 00:10:47.316 START TEST json_config_extra_key 00:10:47.316 ************************************ 00:10:47.316 13:41:46 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:10:47.316 13:41:46 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:47.316 13:41:46 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version 00:10:47.316 13:41:46 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:47.316 13:41:46 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:47.316 13:41:46 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:47.316 13:41:46 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:47.316 13:41:46 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:47.316 13:41:46 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:10:47.316 13:41:46 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:10:47.316 13:41:46 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:10:47.316 13:41:46 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:10:47.316 13:41:46 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:10:47.316 13:41:46 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:10:47.316 13:41:46 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:10:47.316 13:41:46 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:47.316 13:41:46 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:10:47.316 13:41:46 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:10:47.316 13:41:46 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:47.316 13:41:46 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:47.316 13:41:46 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:10:47.316 13:41:46 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:10:47.316 13:41:46 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:47.316 13:41:46 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:10:47.316 13:41:46 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:10:47.316 13:41:46 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:10:47.316 13:41:46 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:10:47.316 13:41:46 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:47.316 13:41:46 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:10:47.316 13:41:46 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:10:47.316 13:41:46 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:47.316 13:41:46 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:47.316 13:41:46 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:10:47.316 13:41:46 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:47.316 13:41:46 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:47.316 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:47.316 --rc genhtml_branch_coverage=1 00:10:47.316 --rc genhtml_function_coverage=1 00:10:47.316 --rc genhtml_legend=1 00:10:47.316 --rc geninfo_all_blocks=1 00:10:47.316 --rc geninfo_unexecuted_blocks=1 00:10:47.316 00:10:47.316 ' 00:10:47.316 13:41:46 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:47.316 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:47.316 --rc genhtml_branch_coverage=1 00:10:47.316 --rc genhtml_function_coverage=1 00:10:47.316 --rc genhtml_legend=1 00:10:47.316 --rc geninfo_all_blocks=1 00:10:47.316 --rc geninfo_unexecuted_blocks=1 00:10:47.316 00:10:47.316 ' 00:10:47.316 13:41:46 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:47.316 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:47.316 --rc genhtml_branch_coverage=1 00:10:47.316 --rc genhtml_function_coverage=1 00:10:47.316 --rc genhtml_legend=1 00:10:47.316 --rc geninfo_all_blocks=1 00:10:47.316 --rc geninfo_unexecuted_blocks=1 00:10:47.316 00:10:47.316 ' 00:10:47.316 13:41:46 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:47.316 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:47.316 --rc genhtml_branch_coverage=1 00:10:47.316 --rc genhtml_function_coverage=1 00:10:47.316 --rc genhtml_legend=1 00:10:47.316 --rc geninfo_all_blocks=1 00:10:47.316 --rc geninfo_unexecuted_blocks=1 00:10:47.316 00:10:47.316 ' 00:10:47.316 13:41:46 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:10:47.316 13:41:46 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:10:47.316 13:41:46 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:47.316 13:41:46 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:47.316 13:41:46 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:47.316 13:41:46 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:47.316 
13:41:46 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:47.316 13:41:46 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:47.316 13:41:46 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:47.316 13:41:46 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:47.316 13:41:46 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:47.316 13:41:46 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:47.316 13:41:46 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:10:47.316 13:41:46 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:10:47.316 13:41:46 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:47.316 13:41:46 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:47.316 13:41:46 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:10:47.316 13:41:46 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:47.316 13:41:46 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:10:47.316 13:41:46 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:10:47.316 13:41:46 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:47.316 13:41:46 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:47.316 13:41:46 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:47.316 13:41:46 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:47.316 13:41:46 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:47.316 13:41:46 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:47.316 13:41:46 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:10:47.316 13:41:46 json_config_extra_key -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:47.316 13:41:46 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:10:47.316 13:41:46 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:47.316 13:41:46 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:47.316 13:41:46 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:47.316 13:41:46 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:47.316 13:41:46 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:47.316 13:41:46 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:47.316 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:47.316 13:41:46 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:47.316 13:41:46 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:47.316 13:41:46 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:47.316 13:41:46 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/common.sh 00:10:47.316 13:41:46 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:10:47.316 13:41:46 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:10:47.316 13:41:46 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:10:47.316 13:41:46 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:10:47.316 13:41:46 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:10:47.316 13:41:46 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:10:47.316 13:41:46 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json') 00:10:47.316 13:41:46 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:10:47.316 13:41:46 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:10:47.316 13:41:46 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:10:47.316 INFO: launching applications... 
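Note the error captured just above: nvmf/common.sh line 33 evaluates '[' '' -eq 1 ']', and test's -eq operator requires integer operands, so an unset or empty variable yields "integer expression expected" (harmless here, since the failing test simply skips that branch). The usual defensive fix is to default the variable before the numeric test; a sketch with a hypothetical variable name:

  # '[ "" -eq 1 ]' is a runtime error; supply a numeric default instead.
  if [ "${SPDK_TEST_FOO:-0}" -eq 1 ]; then   # SPDK_TEST_FOO is illustrative
    echo 'feature enabled'
  fi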
00:10:47.316 13:41:46 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json 00:10:47.316 13:41:46 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:10:47.316 13:41:46 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:10:47.316 13:41:46 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:10:47.316 13:41:46 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:10:47.317 13:41:46 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:10:47.317 13:41:46 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:10:47.317 13:41:46 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:10:47.317 13:41:46 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=1556850 00:10:47.317 13:41:46 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:10:47.317 Waiting for target to run... 00:10:47.317 13:41:46 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 1556850 /var/tmp/spdk_tgt.sock 00:10:47.317 13:41:46 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 1556850 ']' 00:10:47.317 13:41:46 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json 00:10:47.317 13:41:46 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:10:47.317 13:41:46 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:47.317 13:41:46 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:10:47.317 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:10:47.317 13:41:46 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:47.317 13:41:46 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:10:47.317 [2024-12-05 13:41:46.994646] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 00:10:47.317 [2024-12-05 13:41:46.994692] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1556850 ] 00:10:47.575 [2024-12-05 13:41:47.280378] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:47.575 [2024-12-05 13:41:47.292579] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:48.144 13:41:47 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:48.144 13:41:47 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:10:48.144 13:41:47 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:10:48.144 00:10:48.144 13:41:47 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:10:48.144 INFO: shutting down applications... 
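Both json_config and json_config_extra_key drive their targets through the associative arrays declared in common.sh and echoed in the trace above (app_pid, app_socket, app_params, configs_path), keyed by app name so one set of start/shutdown helpers can manage several apps. A minimal sketch of that bookkeeping (start_app is an illustrative helper, not the common.sh function):

  declare -A app_pid=([target]='')
  declare -A app_socket=([target]='/var/tmp/spdk_tgt.sock')
  declare -A app_params=([target]='-m 0x1 -s 1024')

  start_app() {
    local app=$1
    ./build/bin/spdk_tgt ${app_params[$app]} -r "${app_socket[$app]}" &
    app_pid[$app]=$!          # recorded so shutdown can kill -SIGINT it later
  }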
00:10:48.144 13:41:47 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:10:48.144 13:41:47 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:10:48.144 13:41:47 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:10:48.144 13:41:47 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 1556850 ]] 00:10:48.144 13:41:47 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 1556850 00:10:48.144 13:41:47 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:10:48.144 13:41:47 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:48.144 13:41:47 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1556850 00:10:48.144 13:41:47 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:10:48.712 13:41:48 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:10:48.712 13:41:48 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:48.712 13:41:48 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1556850 00:10:48.712 13:41:48 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:10:48.712 13:41:48 json_config_extra_key -- json_config/common.sh@43 -- # break 00:10:48.712 13:41:48 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:10:48.712 13:41:48 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:10:48.712 SPDK target shutdown done 00:10:48.712 13:41:48 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:10:48.712 Success 00:10:48.712 00:10:48.712 real 0m1.524s 00:10:48.712 user 0m1.251s 00:10:48.712 sys 0m0.400s 00:10:48.712 13:41:48 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:48.712 13:41:48 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:10:48.712 ************************************ 00:10:48.712 END TEST json_config_extra_key 00:10:48.712 ************************************ 00:10:48.713 13:41:48 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:10:48.713 13:41:48 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:48.713 13:41:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:48.713 13:41:48 -- common/autotest_common.sh@10 -- # set +x 00:10:48.713 ************************************ 00:10:48.713 START TEST alias_rpc 00:10:48.713 ************************************ 00:10:48.713 13:41:48 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:10:48.713 * Looking for test storage... 
00:10:48.713 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc 00:10:48.713 13:41:48 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:48.713 13:41:48 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:10:48.713 13:41:48 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:48.713 13:41:48 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:48.713 13:41:48 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:48.713 13:41:48 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:48.713 13:41:48 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:48.713 13:41:48 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:10:48.713 13:41:48 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:10:48.713 13:41:48 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:10:48.713 13:41:48 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:10:48.713 13:41:48 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:10:48.713 13:41:48 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:10:48.713 13:41:48 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:10:48.713 13:41:48 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:48.713 13:41:48 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:10:48.713 13:41:48 alias_rpc -- scripts/common.sh@345 -- # : 1 00:10:48.713 13:41:48 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:48.713 13:41:48 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:48.713 13:41:48 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:10:48.713 13:41:48 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:10:48.713 13:41:48 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:48.713 13:41:48 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:10:48.713 13:41:48 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:10:48.713 13:41:48 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:10:48.713 13:41:48 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:10:48.713 13:41:48 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:48.713 13:41:48 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:10:48.713 13:41:48 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:10:48.713 13:41:48 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:48.713 13:41:48 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:48.713 13:41:48 alias_rpc -- scripts/common.sh@368 -- # return 0 00:10:48.713 13:41:48 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:48.713 13:41:48 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:48.713 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:48.713 --rc genhtml_branch_coverage=1 00:10:48.713 --rc genhtml_function_coverage=1 00:10:48.713 --rc genhtml_legend=1 00:10:48.713 --rc geninfo_all_blocks=1 00:10:48.713 --rc geninfo_unexecuted_blocks=1 00:10:48.713 00:10:48.713 ' 00:10:48.713 13:41:48 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:48.713 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:48.713 --rc genhtml_branch_coverage=1 00:10:48.713 --rc genhtml_function_coverage=1 00:10:48.713 --rc genhtml_legend=1 00:10:48.713 --rc geninfo_all_blocks=1 00:10:48.713 --rc geninfo_unexecuted_blocks=1 00:10:48.713 00:10:48.713 ' 00:10:48.713 13:41:48 
alias_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:48.713 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:48.713 --rc genhtml_branch_coverage=1 00:10:48.713 --rc genhtml_function_coverage=1 00:10:48.713 --rc genhtml_legend=1 00:10:48.713 --rc geninfo_all_blocks=1 00:10:48.713 --rc geninfo_unexecuted_blocks=1 00:10:48.713 00:10:48.713 ' 00:10:48.713 13:41:48 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:48.713 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:48.713 --rc genhtml_branch_coverage=1 00:10:48.713 --rc genhtml_function_coverage=1 00:10:48.713 --rc genhtml_legend=1 00:10:48.713 --rc geninfo_all_blocks=1 00:10:48.713 --rc geninfo_unexecuted_blocks=1 00:10:48.713 00:10:48.713 ' 00:10:48.713 13:41:48 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:10:48.713 13:41:48 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1557172 00:10:48.713 13:41:48 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1557172 00:10:48.713 13:41:48 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:10:48.713 13:41:48 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 1557172 ']' 00:10:48.713 13:41:48 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:48.713 13:41:48 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:48.713 13:41:48 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:48.713 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:48.713 13:41:48 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:48.713 13:41:48 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:48.972 [2024-12-05 13:41:48.576496] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 
00:10:48.972 [2024-12-05 13:41:48.576540] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1557172 ] 00:10:48.972 [2024-12-05 13:41:48.650414] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:48.972 [2024-12-05 13:41:48.672065] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:49.232 13:41:48 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:49.232 13:41:48 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:10:49.232 13:41:48 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py load_config -i 00:10:49.232 13:41:49 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1557172 00:10:49.232 13:41:49 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 1557172 ']' 00:10:49.232 13:41:49 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 1557172 00:10:49.232 13:41:49 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:10:49.232 13:41:49 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:49.232 13:41:49 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1557172 00:10:49.492 13:41:49 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:49.492 13:41:49 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:49.492 13:41:49 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1557172' 00:10:49.492 killing process with pid 1557172 00:10:49.492 13:41:49 alias_rpc -- common/autotest_common.sh@973 -- # kill 1557172 00:10:49.492 13:41:49 alias_rpc -- common/autotest_common.sh@978 -- # wait 1557172 00:10:49.768 00:10:49.768 real 0m1.059s 00:10:49.768 user 0m1.083s 00:10:49.768 sys 0m0.388s 00:10:49.768 13:41:49 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:49.768 13:41:49 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:49.768 ************************************ 00:10:49.768 END TEST alias_rpc 00:10:49.768 ************************************ 00:10:49.768 13:41:49 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:10:49.768 13:41:49 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/tcp.sh 00:10:49.768 13:41:49 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:49.768 13:41:49 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:49.768 13:41:49 -- common/autotest_common.sh@10 -- # set +x 00:10:49.768 ************************************ 00:10:49.768 START TEST spdkcli_tcp 00:10:49.768 ************************************ 00:10:49.768 13:41:49 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/tcp.sh 00:10:49.768 * Looking for test storage... 
00:10:49.768 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli 00:10:49.768 13:41:49 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:49.768 13:41:49 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:10:49.768 13:41:49 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:50.099 13:41:49 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:50.099 13:41:49 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:50.099 13:41:49 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:50.099 13:41:49 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:50.099 13:41:49 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:10:50.099 13:41:49 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:10:50.099 13:41:49 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:10:50.099 13:41:49 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:10:50.099 13:41:49 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:10:50.099 13:41:49 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:10:50.099 13:41:49 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:10:50.099 13:41:49 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:50.099 13:41:49 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:10:50.099 13:41:49 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:10:50.099 13:41:49 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:50.099 13:41:49 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:50.099 13:41:49 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:10:50.099 13:41:49 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:10:50.099 13:41:49 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:50.099 13:41:49 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:10:50.099 13:41:49 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:10:50.099 13:41:49 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:10:50.099 13:41:49 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:10:50.099 13:41:49 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:50.099 13:41:49 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:10:50.099 13:41:49 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:10:50.099 13:41:49 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:50.099 13:41:49 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:50.099 13:41:49 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:10:50.099 13:41:49 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:50.099 13:41:49 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:50.099 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:50.099 --rc genhtml_branch_coverage=1 00:10:50.099 --rc genhtml_function_coverage=1 00:10:50.099 --rc genhtml_legend=1 00:10:50.099 --rc geninfo_all_blocks=1 00:10:50.099 --rc geninfo_unexecuted_blocks=1 00:10:50.099 00:10:50.099 ' 00:10:50.099 13:41:49 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:50.099 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:50.099 --rc genhtml_branch_coverage=1 00:10:50.099 --rc genhtml_function_coverage=1 00:10:50.099 --rc genhtml_legend=1 00:10:50.099 --rc geninfo_all_blocks=1 00:10:50.099 --rc geninfo_unexecuted_blocks=1 
00:10:50.099 00:10:50.099 ' 00:10:50.099 13:41:49 spdkcli_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:50.099 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:50.099 --rc genhtml_branch_coverage=1 00:10:50.099 --rc genhtml_function_coverage=1 00:10:50.099 --rc genhtml_legend=1 00:10:50.099 --rc geninfo_all_blocks=1 00:10:50.099 --rc geninfo_unexecuted_blocks=1 00:10:50.099 00:10:50.099 ' 00:10:50.099 13:41:49 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:50.099 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:50.099 --rc genhtml_branch_coverage=1 00:10:50.099 --rc genhtml_function_coverage=1 00:10:50.099 --rc genhtml_legend=1 00:10:50.099 --rc geninfo_all_blocks=1 00:10:50.099 --rc geninfo_unexecuted_blocks=1 00:10:50.099 00:10:50.099 ' 00:10:50.099 13:41:49 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/common.sh 00:10:50.099 13:41:49 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:10:50.099 13:41:49 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py 00:10:50.099 13:41:49 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:10:50.099 13:41:49 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:10:50.099 13:41:49 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:10:50.099 13:41:49 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:10:50.099 13:41:49 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:50.099 13:41:49 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:50.099 13:41:49 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1557499 00:10:50.099 13:41:49 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 1557499 00:10:50.099 13:41:49 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:10:50.099 13:41:49 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 1557499 ']' 00:10:50.099 13:41:49 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:50.099 13:41:49 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:50.099 13:41:49 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:50.099 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:50.099 13:41:49 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:50.099 13:41:49 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:50.099 [2024-12-05 13:41:49.714160] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 
00:10:50.099 [2024-12-05 13:41:49.714201] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1557499 ] 00:10:50.099 [2024-12-05 13:41:49.785380] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:50.099 [2024-12-05 13:41:49.807552] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:50.099 [2024-12-05 13:41:49.807555] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:50.388 13:41:50 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:50.388 13:41:50 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:10:50.388 13:41:50 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=1557507 00:10:50.388 13:41:50 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:10:50.388 13:41:50 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:10:50.388 [ 00:10:50.388 "bdev_malloc_delete", 00:10:50.388 "bdev_malloc_create", 00:10:50.388 "bdev_null_resize", 00:10:50.388 "bdev_null_delete", 00:10:50.388 "bdev_null_create", 00:10:50.388 "bdev_nvme_cuse_unregister", 00:10:50.388 "bdev_nvme_cuse_register", 00:10:50.388 "bdev_opal_new_user", 00:10:50.388 "bdev_opal_set_lock_state", 00:10:50.388 "bdev_opal_delete", 00:10:50.388 "bdev_opal_get_info", 00:10:50.388 "bdev_opal_create", 00:10:50.388 "bdev_nvme_opal_revert", 00:10:50.388 "bdev_nvme_opal_init", 00:10:50.388 "bdev_nvme_send_cmd", 00:10:50.388 "bdev_nvme_set_keys", 00:10:50.388 "bdev_nvme_get_path_iostat", 00:10:50.388 "bdev_nvme_get_mdns_discovery_info", 00:10:50.388 "bdev_nvme_stop_mdns_discovery", 00:10:50.388 "bdev_nvme_start_mdns_discovery", 00:10:50.388 "bdev_nvme_set_multipath_policy", 00:10:50.388 "bdev_nvme_set_preferred_path", 00:10:50.388 "bdev_nvme_get_io_paths", 00:10:50.388 "bdev_nvme_remove_error_injection", 00:10:50.388 "bdev_nvme_add_error_injection", 00:10:50.388 "bdev_nvme_get_discovery_info", 00:10:50.388 "bdev_nvme_stop_discovery", 00:10:50.388 "bdev_nvme_start_discovery", 00:10:50.388 "bdev_nvme_get_controller_health_info", 00:10:50.388 "bdev_nvme_disable_controller", 00:10:50.388 "bdev_nvme_enable_controller", 00:10:50.388 "bdev_nvme_reset_controller", 00:10:50.388 "bdev_nvme_get_transport_statistics", 00:10:50.388 "bdev_nvme_apply_firmware", 00:10:50.388 "bdev_nvme_detach_controller", 00:10:50.388 "bdev_nvme_get_controllers", 00:10:50.388 "bdev_nvme_attach_controller", 00:10:50.388 "bdev_nvme_set_hotplug", 00:10:50.388 "bdev_nvme_set_options", 00:10:50.388 "bdev_passthru_delete", 00:10:50.388 "bdev_passthru_create", 00:10:50.388 "bdev_lvol_set_parent_bdev", 00:10:50.388 "bdev_lvol_set_parent", 00:10:50.388 "bdev_lvol_check_shallow_copy", 00:10:50.388 "bdev_lvol_start_shallow_copy", 00:10:50.388 "bdev_lvol_grow_lvstore", 00:10:50.388 "bdev_lvol_get_lvols", 00:10:50.388 "bdev_lvol_get_lvstores", 00:10:50.388 "bdev_lvol_delete", 00:10:50.388 "bdev_lvol_set_read_only", 00:10:50.388 "bdev_lvol_resize", 00:10:50.388 "bdev_lvol_decouple_parent", 00:10:50.388 "bdev_lvol_inflate", 00:10:50.388 "bdev_lvol_rename", 00:10:50.388 "bdev_lvol_clone_bdev", 00:10:50.388 "bdev_lvol_clone", 00:10:50.388 "bdev_lvol_snapshot", 00:10:50.388 "bdev_lvol_create", 00:10:50.388 "bdev_lvol_delete_lvstore", 00:10:50.388 "bdev_lvol_rename_lvstore", 
00:10:50.388 "bdev_lvol_create_lvstore", 00:10:50.388 "bdev_raid_set_options", 00:10:50.388 "bdev_raid_remove_base_bdev", 00:10:50.388 "bdev_raid_add_base_bdev", 00:10:50.388 "bdev_raid_delete", 00:10:50.388 "bdev_raid_create", 00:10:50.388 "bdev_raid_get_bdevs", 00:10:50.388 "bdev_error_inject_error", 00:10:50.388 "bdev_error_delete", 00:10:50.388 "bdev_error_create", 00:10:50.388 "bdev_split_delete", 00:10:50.388 "bdev_split_create", 00:10:50.388 "bdev_delay_delete", 00:10:50.388 "bdev_delay_create", 00:10:50.388 "bdev_delay_update_latency", 00:10:50.388 "bdev_zone_block_delete", 00:10:50.388 "bdev_zone_block_create", 00:10:50.388 "blobfs_create", 00:10:50.388 "blobfs_detect", 00:10:50.388 "blobfs_set_cache_size", 00:10:50.388 "bdev_aio_delete", 00:10:50.388 "bdev_aio_rescan", 00:10:50.388 "bdev_aio_create", 00:10:50.388 "bdev_ftl_set_property", 00:10:50.388 "bdev_ftl_get_properties", 00:10:50.388 "bdev_ftl_get_stats", 00:10:50.388 "bdev_ftl_unmap", 00:10:50.388 "bdev_ftl_unload", 00:10:50.388 "bdev_ftl_delete", 00:10:50.388 "bdev_ftl_load", 00:10:50.388 "bdev_ftl_create", 00:10:50.388 "bdev_virtio_attach_controller", 00:10:50.388 "bdev_virtio_scsi_get_devices", 00:10:50.388 "bdev_virtio_detach_controller", 00:10:50.388 "bdev_virtio_blk_set_hotplug", 00:10:50.388 "bdev_iscsi_delete", 00:10:50.388 "bdev_iscsi_create", 00:10:50.388 "bdev_iscsi_set_options", 00:10:50.388 "accel_error_inject_error", 00:10:50.388 "ioat_scan_accel_module", 00:10:50.388 "dsa_scan_accel_module", 00:10:50.388 "iaa_scan_accel_module", 00:10:50.388 "keyring_file_remove_key", 00:10:50.388 "keyring_file_add_key", 00:10:50.388 "keyring_linux_set_options", 00:10:50.388 "fsdev_aio_delete", 00:10:50.388 "fsdev_aio_create", 00:10:50.388 "iscsi_get_histogram", 00:10:50.388 "iscsi_enable_histogram", 00:10:50.388 "iscsi_set_options", 00:10:50.388 "iscsi_get_auth_groups", 00:10:50.388 "iscsi_auth_group_remove_secret", 00:10:50.388 "iscsi_auth_group_add_secret", 00:10:50.388 "iscsi_delete_auth_group", 00:10:50.388 "iscsi_create_auth_group", 00:10:50.388 "iscsi_set_discovery_auth", 00:10:50.388 "iscsi_get_options", 00:10:50.388 "iscsi_target_node_request_logout", 00:10:50.388 "iscsi_target_node_set_redirect", 00:10:50.388 "iscsi_target_node_set_auth", 00:10:50.388 "iscsi_target_node_add_lun", 00:10:50.388 "iscsi_get_stats", 00:10:50.388 "iscsi_get_connections", 00:10:50.388 "iscsi_portal_group_set_auth", 00:10:50.388 "iscsi_start_portal_group", 00:10:50.388 "iscsi_delete_portal_group", 00:10:50.388 "iscsi_create_portal_group", 00:10:50.388 "iscsi_get_portal_groups", 00:10:50.388 "iscsi_delete_target_node", 00:10:50.388 "iscsi_target_node_remove_pg_ig_maps", 00:10:50.388 "iscsi_target_node_add_pg_ig_maps", 00:10:50.388 "iscsi_create_target_node", 00:10:50.388 "iscsi_get_target_nodes", 00:10:50.388 "iscsi_delete_initiator_group", 00:10:50.388 "iscsi_initiator_group_remove_initiators", 00:10:50.388 "iscsi_initiator_group_add_initiators", 00:10:50.388 "iscsi_create_initiator_group", 00:10:50.388 "iscsi_get_initiator_groups", 00:10:50.388 "nvmf_set_crdt", 00:10:50.388 "nvmf_set_config", 00:10:50.388 "nvmf_set_max_subsystems", 00:10:50.388 "nvmf_stop_mdns_prr", 00:10:50.388 "nvmf_publish_mdns_prr", 00:10:50.388 "nvmf_subsystem_get_listeners", 00:10:50.388 "nvmf_subsystem_get_qpairs", 00:10:50.388 "nvmf_subsystem_get_controllers", 00:10:50.388 "nvmf_get_stats", 00:10:50.388 "nvmf_get_transports", 00:10:50.388 "nvmf_create_transport", 00:10:50.388 "nvmf_get_targets", 00:10:50.388 "nvmf_delete_target", 00:10:50.388 "nvmf_create_target", 
00:10:50.388 "nvmf_subsystem_allow_any_host", 00:10:50.388 "nvmf_subsystem_set_keys", 00:10:50.388 "nvmf_subsystem_remove_host", 00:10:50.388 "nvmf_subsystem_add_host", 00:10:50.388 "nvmf_ns_remove_host", 00:10:50.388 "nvmf_ns_add_host", 00:10:50.388 "nvmf_subsystem_remove_ns", 00:10:50.388 "nvmf_subsystem_set_ns_ana_group", 00:10:50.388 "nvmf_subsystem_add_ns", 00:10:50.388 "nvmf_subsystem_listener_set_ana_state", 00:10:50.388 "nvmf_discovery_get_referrals", 00:10:50.388 "nvmf_discovery_remove_referral", 00:10:50.388 "nvmf_discovery_add_referral", 00:10:50.388 "nvmf_subsystem_remove_listener", 00:10:50.388 "nvmf_subsystem_add_listener", 00:10:50.388 "nvmf_delete_subsystem", 00:10:50.388 "nvmf_create_subsystem", 00:10:50.388 "nvmf_get_subsystems", 00:10:50.388 "env_dpdk_get_mem_stats", 00:10:50.388 "nbd_get_disks", 00:10:50.388 "nbd_stop_disk", 00:10:50.388 "nbd_start_disk", 00:10:50.388 "ublk_recover_disk", 00:10:50.388 "ublk_get_disks", 00:10:50.388 "ublk_stop_disk", 00:10:50.388 "ublk_start_disk", 00:10:50.388 "ublk_destroy_target", 00:10:50.388 "ublk_create_target", 00:10:50.388 "virtio_blk_create_transport", 00:10:50.388 "virtio_blk_get_transports", 00:10:50.388 "vhost_controller_set_coalescing", 00:10:50.388 "vhost_get_controllers", 00:10:50.388 "vhost_delete_controller", 00:10:50.388 "vhost_create_blk_controller", 00:10:50.388 "vhost_scsi_controller_remove_target", 00:10:50.388 "vhost_scsi_controller_add_target", 00:10:50.388 "vhost_start_scsi_controller", 00:10:50.388 "vhost_create_scsi_controller", 00:10:50.388 "thread_set_cpumask", 00:10:50.388 "scheduler_set_options", 00:10:50.389 "framework_get_governor", 00:10:50.389 "framework_get_scheduler", 00:10:50.389 "framework_set_scheduler", 00:10:50.389 "framework_get_reactors", 00:10:50.389 "thread_get_io_channels", 00:10:50.389 "thread_get_pollers", 00:10:50.389 "thread_get_stats", 00:10:50.389 "framework_monitor_context_switch", 00:10:50.389 "spdk_kill_instance", 00:10:50.389 "log_enable_timestamps", 00:10:50.389 "log_get_flags", 00:10:50.389 "log_clear_flag", 00:10:50.389 "log_set_flag", 00:10:50.389 "log_get_level", 00:10:50.389 "log_set_level", 00:10:50.389 "log_get_print_level", 00:10:50.389 "log_set_print_level", 00:10:50.389 "framework_enable_cpumask_locks", 00:10:50.389 "framework_disable_cpumask_locks", 00:10:50.389 "framework_wait_init", 00:10:50.389 "framework_start_init", 00:10:50.389 "scsi_get_devices", 00:10:50.389 "bdev_get_histogram", 00:10:50.389 "bdev_enable_histogram", 00:10:50.389 "bdev_set_qos_limit", 00:10:50.389 "bdev_set_qd_sampling_period", 00:10:50.389 "bdev_get_bdevs", 00:10:50.389 "bdev_reset_iostat", 00:10:50.389 "bdev_get_iostat", 00:10:50.389 "bdev_examine", 00:10:50.389 "bdev_wait_for_examine", 00:10:50.389 "bdev_set_options", 00:10:50.389 "accel_get_stats", 00:10:50.389 "accel_set_options", 00:10:50.389 "accel_set_driver", 00:10:50.389 "accel_crypto_key_destroy", 00:10:50.389 "accel_crypto_keys_get", 00:10:50.389 "accel_crypto_key_create", 00:10:50.389 "accel_assign_opc", 00:10:50.389 "accel_get_module_info", 00:10:50.389 "accel_get_opc_assignments", 00:10:50.389 "vmd_rescan", 00:10:50.389 "vmd_remove_device", 00:10:50.389 "vmd_enable", 00:10:50.389 "sock_get_default_impl", 00:10:50.389 "sock_set_default_impl", 00:10:50.389 "sock_impl_set_options", 00:10:50.389 "sock_impl_get_options", 00:10:50.389 "iobuf_get_stats", 00:10:50.389 "iobuf_set_options", 00:10:50.389 "keyring_get_keys", 00:10:50.389 "framework_get_pci_devices", 00:10:50.389 "framework_get_config", 00:10:50.389 "framework_get_subsystems", 
00:10:50.389 "fsdev_set_opts", 00:10:50.389 "fsdev_get_opts", 00:10:50.389 "trace_get_info", 00:10:50.389 "trace_get_tpoint_group_mask", 00:10:50.389 "trace_disable_tpoint_group", 00:10:50.389 "trace_enable_tpoint_group", 00:10:50.389 "trace_clear_tpoint_mask", 00:10:50.389 "trace_set_tpoint_mask", 00:10:50.389 "notify_get_notifications", 00:10:50.389 "notify_get_types", 00:10:50.389 "spdk_get_version", 00:10:50.389 "rpc_get_methods" 00:10:50.389 ] 00:10:50.389 13:41:50 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:10:50.389 13:41:50 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:50.389 13:41:50 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:50.389 13:41:50 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:10:50.389 13:41:50 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 1557499 00:10:50.389 13:41:50 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 1557499 ']' 00:10:50.389 13:41:50 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 1557499 00:10:50.389 13:41:50 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:10:50.389 13:41:50 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:50.389 13:41:50 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1557499 00:10:50.649 13:41:50 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:50.649 13:41:50 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:50.649 13:41:50 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1557499' 00:10:50.649 killing process with pid 1557499 00:10:50.649 13:41:50 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 1557499 00:10:50.649 13:41:50 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 1557499 00:10:50.909 00:10:50.909 real 0m1.081s 00:10:50.909 user 0m1.812s 00:10:50.909 sys 0m0.448s 00:10:50.909 13:41:50 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:50.909 13:41:50 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:50.909 ************************************ 00:10:50.909 END TEST spdkcli_tcp 00:10:50.909 ************************************ 00:10:50.909 13:41:50 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:10:50.909 13:41:50 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:50.909 13:41:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:50.909 13:41:50 -- common/autotest_common.sh@10 -- # set +x 00:10:50.909 ************************************ 00:10:50.909 START TEST dpdk_mem_utility 00:10:50.909 ************************************ 00:10:50.909 13:41:50 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:10:50.909 * Looking for test storage... 
00:10:50.909 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility 00:10:50.909 13:41:50 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:50.909 13:41:50 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:10:50.909 13:41:50 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:51.167 13:41:50 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:51.167 13:41:50 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:51.167 13:41:50 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:51.167 13:41:50 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:51.167 13:41:50 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:10:51.167 13:41:50 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:10:51.167 13:41:50 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:10:51.167 13:41:50 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:10:51.167 13:41:50 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:10:51.167 13:41:50 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:10:51.167 13:41:50 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:10:51.167 13:41:50 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:51.167 13:41:50 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:10:51.167 13:41:50 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:10:51.167 13:41:50 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:51.167 13:41:50 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:51.167 13:41:50 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:10:51.167 13:41:50 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:10:51.167 13:41:50 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:51.167 13:41:50 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:10:51.167 13:41:50 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:10:51.167 13:41:50 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:10:51.167 13:41:50 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:10:51.167 13:41:50 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:51.167 13:41:50 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:10:51.167 13:41:50 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:10:51.167 13:41:50 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:51.167 13:41:50 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:51.167 13:41:50 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:10:51.167 13:41:50 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:51.167 13:41:50 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:51.167 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:51.167 --rc genhtml_branch_coverage=1 00:10:51.168 --rc genhtml_function_coverage=1 00:10:51.168 --rc genhtml_legend=1 00:10:51.168 --rc geninfo_all_blocks=1 00:10:51.168 --rc geninfo_unexecuted_blocks=1 00:10:51.168 00:10:51.168 ' 00:10:51.168 13:41:50 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:51.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:51.168 --rc 
genhtml_branch_coverage=1 00:10:51.168 --rc genhtml_function_coverage=1 00:10:51.168 --rc genhtml_legend=1 00:10:51.168 --rc geninfo_all_blocks=1 00:10:51.168 --rc geninfo_unexecuted_blocks=1 00:10:51.168 00:10:51.168 ' 00:10:51.168 13:41:50 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:51.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:51.168 --rc genhtml_branch_coverage=1 00:10:51.168 --rc genhtml_function_coverage=1 00:10:51.168 --rc genhtml_legend=1 00:10:51.168 --rc geninfo_all_blocks=1 00:10:51.168 --rc geninfo_unexecuted_blocks=1 00:10:51.168 00:10:51.168 ' 00:10:51.168 13:41:50 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:51.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:51.168 --rc genhtml_branch_coverage=1 00:10:51.168 --rc genhtml_function_coverage=1 00:10:51.168 --rc genhtml_legend=1 00:10:51.168 --rc geninfo_all_blocks=1 00:10:51.168 --rc geninfo_unexecuted_blocks=1 00:10:51.168 00:10:51.168 ' 00:10:51.168 13:41:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:10:51.168 13:41:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1557771 00:10:51.168 13:41:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1557771 00:10:51.168 13:41:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:10:51.168 13:41:50 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 1557771 ']' 00:10:51.168 13:41:50 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:51.168 13:41:50 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:51.168 13:41:50 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:51.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:51.168 13:41:50 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:51.168 13:41:50 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:10:51.168 [2024-12-05 13:41:50.870505] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 
00:10:51.168 [2024-12-05 13:41:50.870555] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1557771 ] 00:10:51.168 [2024-12-05 13:41:50.942615] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:51.168 [2024-12-05 13:41:50.964289] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:51.426 13:41:51 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:51.427 13:41:51 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:10:51.427 13:41:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:10:51.427 13:41:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:10:51.427 13:41:51 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.427 13:41:51 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:10:51.427 { 00:10:51.427 "filename": "/tmp/spdk_mem_dump.txt" 00:10:51.427 } 00:10:51.427 13:41:51 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.427 13:41:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:10:51.427 DPDK memory size 818.000000 MiB in 1 heap(s) 00:10:51.427 1 heaps totaling size 818.000000 MiB 00:10:51.427 size: 818.000000 MiB heap id: 0 00:10:51.427 end heaps---------- 00:10:51.427 9 mempools totaling size 603.782043 MiB 00:10:51.427 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:10:51.427 size: 158.602051 MiB name: PDU_data_out_Pool 00:10:51.427 size: 100.555481 MiB name: bdev_io_1557771 00:10:51.427 size: 50.003479 MiB name: msgpool_1557771 00:10:51.427 size: 36.509338 MiB name: fsdev_io_1557771 00:10:51.427 size: 21.763794 MiB name: PDU_Pool 00:10:51.427 size: 19.513306 MiB name: SCSI_TASK_Pool 00:10:51.427 size: 4.133484 MiB name: evtpool_1557771 00:10:51.427 size: 0.026123 MiB name: Session_Pool 00:10:51.427 end mempools------- 00:10:51.427 6 memzones totaling size 4.142822 MiB 00:10:51.427 size: 1.000366 MiB name: RG_ring_0_1557771 00:10:51.427 size: 1.000366 MiB name: RG_ring_1_1557771 00:10:51.427 size: 1.000366 MiB name: RG_ring_4_1557771 00:10:51.427 size: 1.000366 MiB name: RG_ring_5_1557771 00:10:51.427 size: 0.125366 MiB name: RG_ring_2_1557771 00:10:51.427 size: 0.015991 MiB name: RG_ring_3_1557771 00:10:51.427 end memzones------- 00:10:51.427 13:41:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:10:51.427 heap id: 0 total size: 818.000000 MiB number of busy elements: 44 number of free elements: 15 00:10:51.427 list of free elements. 
size: 10.852478 MiB 00:10:51.427 element at address: 0x200019200000 with size: 0.999878 MiB 00:10:51.427 element at address: 0x200019400000 with size: 0.999878 MiB 00:10:51.427 element at address: 0x200000400000 with size: 0.998535 MiB 00:10:51.427 element at address: 0x200032000000 with size: 0.994446 MiB 00:10:51.427 element at address: 0x200006400000 with size: 0.959839 MiB 00:10:51.427 element at address: 0x200012c00000 with size: 0.944275 MiB 00:10:51.427 element at address: 0x200019600000 with size: 0.936584 MiB 00:10:51.427 element at address: 0x200000200000 with size: 0.717346 MiB 00:10:51.427 element at address: 0x20001ae00000 with size: 0.582886 MiB 00:10:51.427 element at address: 0x200000c00000 with size: 0.495422 MiB 00:10:51.427 element at address: 0x20000a600000 with size: 0.490723 MiB 00:10:51.427 element at address: 0x200019800000 with size: 0.485657 MiB 00:10:51.427 element at address: 0x200003e00000 with size: 0.481934 MiB 00:10:51.427 element at address: 0x200028200000 with size: 0.410034 MiB 00:10:51.427 element at address: 0x200000800000 with size: 0.355042 MiB 00:10:51.427 list of standard malloc elements. size: 199.218628 MiB 00:10:51.427 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:10:51.427 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:10:51.427 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:10:51.427 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:10:51.427 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:10:51.427 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:10:51.427 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:10:51.427 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:10:51.427 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:10:51.427 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:10:51.427 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:10:51.427 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:10:51.427 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:10:51.427 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:10:51.427 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:10:51.427 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:10:51.427 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:10:51.427 element at address: 0x20000085b040 with size: 0.000183 MiB 00:10:51.427 element at address: 0x20000085f300 with size: 0.000183 MiB 00:10:51.427 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:10:51.427 element at address: 0x20000087f680 with size: 0.000183 MiB 00:10:51.427 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:10:51.427 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:10:51.427 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:10:51.427 element at address: 0x200000cff000 with size: 0.000183 MiB 00:10:51.427 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:10:51.427 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:10:51.427 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:10:51.427 element at address: 0x200003efb980 with size: 0.000183 MiB 00:10:51.427 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:10:51.427 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:10:51.427 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:10:51.427 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 
00:10:51.427 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:10:51.427 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:10:51.427 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:10:51.427 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:10:51.427 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:10:51.427 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:10:51.427 element at address: 0x200028268f80 with size: 0.000183 MiB 00:10:51.427 element at address: 0x200028269040 with size: 0.000183 MiB 00:10:51.427 element at address: 0x20002826fc40 with size: 0.000183 MiB 00:10:51.427 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:10:51.427 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:10:51.427 list of memzone associated elements. size: 607.928894 MiB 00:10:51.427 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:10:51.427 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:10:51.427 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:10:51.427 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:10:51.427 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:10:51.427 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_1557771_0 00:10:51.427 element at address: 0x200000dff380 with size: 48.003052 MiB 00:10:51.427 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1557771_0 00:10:51.427 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:10:51.427 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_1557771_0 00:10:51.427 element at address: 0x2000199be940 with size: 20.255554 MiB 00:10:51.427 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:10:51.427 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:10:51.427 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:10:51.427 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:10:51.427 associated memzone info: size: 3.000122 MiB name: MP_evtpool_1557771_0 00:10:51.427 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:10:51.427 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1557771 00:10:51.427 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:10:51.427 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1557771 00:10:51.427 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:10:51.427 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:10:51.427 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:10:51.427 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:10:51.427 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:10:51.427 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:10:51.427 element at address: 0x200003efba40 with size: 1.008118 MiB 00:10:51.427 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:10:51.427 element at address: 0x200000cff180 with size: 1.000488 MiB 00:10:51.427 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1557771 00:10:51.427 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:10:51.427 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1557771 00:10:51.427 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:10:51.427 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1557771 00:10:51.427 element at address: 
0x2000320fe940 with size: 1.000488 MiB 00:10:51.427 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1557771 00:10:51.427 element at address: 0x20000087f740 with size: 0.500488 MiB 00:10:51.427 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_1557771 00:10:51.427 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:10:51.427 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1557771 00:10:51.427 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:10:51.427 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:10:51.427 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:10:51.427 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:10:51.427 element at address: 0x20001987c540 with size: 0.250488 MiB 00:10:51.427 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:10:51.427 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:10:51.427 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_1557771 00:10:51.427 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:10:51.427 associated memzone info: size: 0.125366 MiB name: RG_ring_2_1557771 00:10:51.427 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:10:51.427 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:10:51.427 element at address: 0x200028269100 with size: 0.023743 MiB 00:10:51.427 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:10:51.427 element at address: 0x20000085b100 with size: 0.016113 MiB 00:10:51.427 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1557771 00:10:51.427 element at address: 0x20002826f240 with size: 0.002441 MiB 00:10:51.427 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:10:51.427 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:10:51.427 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1557771 00:10:51.427 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:10:51.427 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_1557771 00:10:51.427 element at address: 0x20000085af00 with size: 0.000305 MiB 00:10:51.427 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1557771 00:10:51.427 element at address: 0x20002826fd00 with size: 0.000305 MiB 00:10:51.427 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:10:51.427 13:41:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:10:51.427 13:41:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1557771 00:10:51.427 13:41:51 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 1557771 ']' 00:10:51.427 13:41:51 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 1557771 00:10:51.427 13:41:51 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:10:51.427 13:41:51 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:51.427 13:41:51 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1557771 00:10:51.685 13:41:51 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:51.686 13:41:51 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:51.686 13:41:51 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1557771' 00:10:51.686 killing process with pid 1557771 00:10:51.686 13:41:51 
dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 1557771 00:10:51.686 13:41:51 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 1557771 00:10:51.944 00:10:51.944 real 0m0.947s 00:10:51.944 user 0m0.868s 00:10:51.944 sys 0m0.405s 00:10:51.944 13:41:51 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:51.944 13:41:51 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:10:51.944 ************************************ 00:10:51.944 END TEST dpdk_mem_utility 00:10:51.944 ************************************ 00:10:51.944 13:41:51 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event.sh 00:10:51.944 13:41:51 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:51.944 13:41:51 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:51.944 13:41:51 -- common/autotest_common.sh@10 -- # set +x 00:10:51.944 ************************************ 00:10:51.944 START TEST event 00:10:51.944 ************************************ 00:10:51.944 13:41:51 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event.sh 00:10:51.944 * Looking for test storage... 00:10:51.944 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event 00:10:51.944 13:41:51 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:51.944 13:41:51 event -- common/autotest_common.sh@1711 -- # lcov --version 00:10:51.944 13:41:51 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:52.202 13:41:51 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:52.202 13:41:51 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:52.202 13:41:51 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:52.202 13:41:51 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:52.202 13:41:51 event -- scripts/common.sh@336 -- # IFS=.-: 00:10:52.202 13:41:51 event -- scripts/common.sh@336 -- # read -ra ver1 00:10:52.202 13:41:51 event -- scripts/common.sh@337 -- # IFS=.-: 00:10:52.202 13:41:51 event -- scripts/common.sh@337 -- # read -ra ver2 00:10:52.202 13:41:51 event -- scripts/common.sh@338 -- # local 'op=<' 00:10:52.202 13:41:51 event -- scripts/common.sh@340 -- # ver1_l=2 00:10:52.202 13:41:51 event -- scripts/common.sh@341 -- # ver2_l=1 00:10:52.202 13:41:51 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:52.202 13:41:51 event -- scripts/common.sh@344 -- # case "$op" in 00:10:52.202 13:41:51 event -- scripts/common.sh@345 -- # : 1 00:10:52.202 13:41:51 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:52.202 13:41:51 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:52.202 13:41:51 event -- scripts/common.sh@365 -- # decimal 1 00:10:52.202 13:41:51 event -- scripts/common.sh@353 -- # local d=1 00:10:52.202 13:41:51 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:52.202 13:41:51 event -- scripts/common.sh@355 -- # echo 1 00:10:52.202 13:41:51 event -- scripts/common.sh@365 -- # ver1[v]=1 00:10:52.202 13:41:51 event -- scripts/common.sh@366 -- # decimal 2 00:10:52.202 13:41:51 event -- scripts/common.sh@353 -- # local d=2 00:10:52.202 13:41:51 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:52.202 13:41:51 event -- scripts/common.sh@355 -- # echo 2 00:10:52.202 13:41:51 event -- scripts/common.sh@366 -- # ver2[v]=2 00:10:52.202 13:41:51 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:52.202 13:41:51 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:52.202 13:41:51 event -- scripts/common.sh@368 -- # return 0 00:10:52.202 13:41:51 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:52.202 13:41:51 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:52.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:52.202 --rc genhtml_branch_coverage=1 00:10:52.202 --rc genhtml_function_coverage=1 00:10:52.202 --rc genhtml_legend=1 00:10:52.202 --rc geninfo_all_blocks=1 00:10:52.202 --rc geninfo_unexecuted_blocks=1 00:10:52.202 00:10:52.202 ' 00:10:52.202 13:41:51 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:52.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:52.202 --rc genhtml_branch_coverage=1 00:10:52.202 --rc genhtml_function_coverage=1 00:10:52.202 --rc genhtml_legend=1 00:10:52.202 --rc geninfo_all_blocks=1 00:10:52.202 --rc geninfo_unexecuted_blocks=1 00:10:52.202 00:10:52.202 ' 00:10:52.202 13:41:51 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:52.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:52.202 --rc genhtml_branch_coverage=1 00:10:52.202 --rc genhtml_function_coverage=1 00:10:52.202 --rc genhtml_legend=1 00:10:52.202 --rc geninfo_all_blocks=1 00:10:52.202 --rc geninfo_unexecuted_blocks=1 00:10:52.202 00:10:52.202 ' 00:10:52.202 13:41:51 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:52.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:52.202 --rc genhtml_branch_coverage=1 00:10:52.202 --rc genhtml_function_coverage=1 00:10:52.202 --rc genhtml_legend=1 00:10:52.202 --rc geninfo_all_blocks=1 00:10:52.202 --rc geninfo_unexecuted_blocks=1 00:10:52.202 00:10:52.202 ' 00:10:52.203 13:41:51 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/bdev/nbd_common.sh 00:10:52.203 13:41:51 event -- bdev/nbd_common.sh@6 -- # set -e 00:10:52.203 13:41:51 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:10:52.203 13:41:51 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:10:52.203 13:41:51 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:52.203 13:41:51 event -- common/autotest_common.sh@10 -- # set +x 00:10:52.203 ************************************ 00:10:52.203 START TEST event_perf 00:10:52.203 ************************************ 00:10:52.203 13:41:51 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 
00:10:52.203 Running I/O for 1 seconds...[2024-12-05 13:41:51.883401] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 00:10:52.203 [2024-12-05 13:41:51.883468] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1557917 ] 00:10:52.203 [2024-12-05 13:41:51.964288] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:52.203 [2024-12-05 13:41:51.989301] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:52.203 [2024-12-05 13:41:51.989424] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:52.203 [2024-12-05 13:41:51.989469] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:52.203 [2024-12-05 13:41:51.989470] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:53.575 Running I/O for 1 seconds... 00:10:53.575 lcore 0: 220211 00:10:53.575 lcore 1: 220211 00:10:53.575 lcore 2: 220210 00:10:53.575 lcore 3: 220210 00:10:53.575 done. 00:10:53.575 00:10:53.575 real 0m1.161s 00:10:53.575 user 0m4.070s 00:10:53.575 sys 0m0.088s 00:10:53.575 13:41:53 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:53.575 13:41:53 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:10:53.575 ************************************ 00:10:53.575 END TEST event_perf 00:10:53.575 ************************************ 00:10:53.575 13:41:53 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:10:53.575 13:41:53 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:53.575 13:41:53 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:53.575 13:41:53 event -- common/autotest_common.sh@10 -- # set +x 00:10:53.575 ************************************ 00:10:53.575 START TEST event_reactor 00:10:53.575 ************************************ 00:10:53.575 13:41:53 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:10:53.575 [2024-12-05 13:41:53.116207] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 
00:10:53.575 [2024-12-05 13:41:53.116274] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1558199 ] 00:10:53.575 [2024-12-05 13:41:53.193622] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:53.575 [2024-12-05 13:41:53.214572] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:54.509 test_start 00:10:54.509 oneshot 00:10:54.509 tick 100 00:10:54.509 tick 100 00:10:54.509 tick 250 00:10:54.509 tick 100 00:10:54.509 tick 100 00:10:54.509 tick 100 00:10:54.509 tick 250 00:10:54.509 tick 500 00:10:54.509 tick 100 00:10:54.509 tick 100 00:10:54.509 tick 250 00:10:54.509 tick 100 00:10:54.509 tick 100 00:10:54.509 test_end 00:10:54.509 00:10:54.509 real 0m1.148s 00:10:54.509 user 0m1.067s 00:10:54.509 sys 0m0.077s 00:10:54.509 13:41:54 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:54.509 13:41:54 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:10:54.509 ************************************ 00:10:54.509 END TEST event_reactor 00:10:54.509 ************************************ 00:10:54.509 13:41:54 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:10:54.509 13:41:54 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:54.509 13:41:54 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:54.509 13:41:54 event -- common/autotest_common.sh@10 -- # set +x 00:10:54.509 ************************************ 00:10:54.509 START TEST event_reactor_perf 00:10:54.509 ************************************ 00:10:54.509 13:41:54 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:10:54.509 [2024-12-05 13:41:54.336599] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 
00:10:54.509 [2024-12-05 13:41:54.336662] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1558479 ] 00:10:54.768 [2024-12-05 13:41:54.414065] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:54.768 [2024-12-05 13:41:54.434708] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:55.704 test_start 00:10:55.704 test_end 00:10:55.704 Performance: 555273 events per second 00:10:55.704 00:10:55.704 real 0m1.151s 00:10:55.704 user 0m1.071s 00:10:55.704 sys 0m0.077s 00:10:55.704 13:41:55 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:55.704 13:41:55 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:10:55.704 ************************************ 00:10:55.704 END TEST event_reactor_perf 00:10:55.704 ************************************ 00:10:55.704 13:41:55 event -- event/event.sh@49 -- # uname -s 00:10:55.704 13:41:55 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:10:55.704 13:41:55 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:10:55.704 13:41:55 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:55.704 13:41:55 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:55.704 13:41:55 event -- common/autotest_common.sh@10 -- # set +x 00:10:55.704 ************************************ 00:10:55.704 START TEST event_scheduler 00:10:55.704 ************************************ 00:10:55.704 13:41:55 event.event_scheduler -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:10:55.964 * Looking for test storage... 
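[annotation] The three event-framework micro-tests traced above — event_perf, reactor, and reactor_perf — are standalone binaries under test/event/ that take a core mask (-m) and a run time in seconds (-t). A minimal sketch of invoking them directly, using the exact flags visible in the trace; only $SPDK_DIR is an assumption:

  # assumes $SPDK_DIR points at a built SPDK tree
  # dispatch events across four cores for one second (prints per-lcore counts)
  $SPDK_DIR/test/event/event_perf/event_perf -m 0xF -t 1

  # exercise timer/poller ticks on a single reactor
  $SPDK_DIR/test/event/reactor/reactor -t 1

  # raw event throughput on one core; prints "Performance: N events per second"
  $SPDK_DIR/test/event/reactor_perf/reactor_perf -t 1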
00:10:55.964 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler 00:10:55.964 13:41:55 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:55.964 13:41:55 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:10:55.964 13:41:55 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:55.964 13:41:55 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:55.964 13:41:55 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:55.964 13:41:55 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:55.964 13:41:55 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:55.964 13:41:55 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:10:55.964 13:41:55 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:10:55.964 13:41:55 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:10:55.964 13:41:55 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:10:55.964 13:41:55 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:10:55.964 13:41:55 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:10:55.964 13:41:55 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:10:55.964 13:41:55 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:55.964 13:41:55 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:10:55.964 13:41:55 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:10:55.964 13:41:55 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:55.964 13:41:55 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:55.964 13:41:55 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:10:55.964 13:41:55 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:10:55.964 13:41:55 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:55.964 13:41:55 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:10:55.964 13:41:55 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:10:55.964 13:41:55 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:10:55.964 13:41:55 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:10:55.964 13:41:55 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:55.964 13:41:55 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:10:55.964 13:41:55 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:10:55.964 13:41:55 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:55.964 13:41:55 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:55.964 13:41:55 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:10:55.964 13:41:55 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:55.964 13:41:55 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:55.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:55.964 --rc genhtml_branch_coverage=1 00:10:55.964 --rc genhtml_function_coverage=1 00:10:55.964 --rc genhtml_legend=1 00:10:55.964 --rc geninfo_all_blocks=1 00:10:55.964 --rc geninfo_unexecuted_blocks=1 00:10:55.964 00:10:55.964 ' 00:10:55.964 13:41:55 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:55.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:55.964 --rc genhtml_branch_coverage=1 00:10:55.964 --rc genhtml_function_coverage=1 00:10:55.964 --rc genhtml_legend=1 00:10:55.964 --rc geninfo_all_blocks=1 00:10:55.964 --rc geninfo_unexecuted_blocks=1 00:10:55.964 00:10:55.964 ' 00:10:55.964 13:41:55 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:55.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:55.964 --rc genhtml_branch_coverage=1 00:10:55.964 --rc genhtml_function_coverage=1 00:10:55.964 --rc genhtml_legend=1 00:10:55.964 --rc geninfo_all_blocks=1 00:10:55.964 --rc geninfo_unexecuted_blocks=1 00:10:55.964 00:10:55.964 ' 00:10:55.964 13:41:55 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:55.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:55.964 --rc genhtml_branch_coverage=1 00:10:55.964 --rc genhtml_function_coverage=1 00:10:55.964 --rc genhtml_legend=1 00:10:55.964 --rc geninfo_all_blocks=1 00:10:55.964 --rc geninfo_unexecuted_blocks=1 00:10:55.964 00:10:55.964 ' 00:10:55.964 13:41:55 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:10:55.964 13:41:55 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=1558792 00:10:55.964 13:41:55 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:10:55.964 13:41:55 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:10:55.964 13:41:55 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 1558792 
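[annotation] The scheduler test app is launched with --wait-for-rpc, so the dynamic scheduler can be selected over RPC before the framework initializes — exactly the framework_set_scheduler / framework_start_init sequence traced below. A minimal sketch of that startup, with $SPDK_DIR assumed and the flags copied from the trace:

  # assumes $SPDK_DIR points at a built SPDK tree
  # start the scheduler test app paused: reactors on mask 0xF, main lcore 2
  $SPDK_DIR/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f &

  # once the app is listening on /var/tmp/spdk.sock, pick the dynamic
  # scheduler and then let framework initialization proceed
  $SPDK_DIR/scripts/rpc.py framework_set_scheduler dynamic
  $SPDK_DIR/scripts/rpc.py framework_start_init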
00:10:55.964 13:41:55 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 1558792 ']' 00:10:55.964 13:41:55 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:55.964 13:41:55 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:55.964 13:41:55 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:55.964 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:55.964 13:41:55 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:55.964 13:41:55 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:10:55.964 [2024-12-05 13:41:55.753706] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 00:10:55.964 [2024-12-05 13:41:55.753749] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1558792 ] 00:10:56.223 [2024-12-05 13:41:55.829621] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:56.223 [2024-12-05 13:41:55.854461] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:56.223 [2024-12-05 13:41:55.854572] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:56.223 [2024-12-05 13:41:55.854653] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:56.223 [2024-12-05 13:41:55.854653] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:56.223 13:41:55 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:56.223 13:41:55 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:10:56.223 13:41:55 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:10:56.223 13:41:55 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.223 13:41:55 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:10:56.223 [2024-12-05 13:41:55.895218] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:10:56.223 [2024-12-05 13:41:55.895233] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:10:56.223 [2024-12-05 13:41:55.895241] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:10:56.223 [2024-12-05 13:41:55.895246] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:10:56.223 [2024-12-05 13:41:55.895250] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:10:56.223 13:41:55 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.223 13:41:55 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:10:56.223 13:41:55 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.223 13:41:55 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:10:56.223 [2024-12-05 13:41:55.964543] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:10:56.223 13:41:55 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.223 13:41:55 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:10:56.223 13:41:55 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:56.223 13:41:55 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:56.223 13:41:55 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:10:56.223 ************************************ 00:10:56.223 START TEST scheduler_create_thread 00:10:56.224 ************************************ 00:10:56.224 13:41:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:10:56.224 13:41:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:10:56.224 13:41:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.224 13:41:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:56.224 2 00:10:56.224 13:41:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.224 13:41:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:10:56.224 13:41:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.224 13:41:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:56.224 3 00:10:56.224 13:41:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.224 13:41:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:10:56.224 13:41:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.224 13:41:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:56.224 4 00:10:56.224 13:41:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.224 13:41:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:10:56.224 13:41:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.224 13:41:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:56.224 5 00:10:56.224 13:41:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.224 13:41:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:10:56.224 13:41:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.224 13:41:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:56.224 6 00:10:56.224 13:41:56 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.224 13:41:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:10:56.224 13:41:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.224 13:41:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:56.224 7 00:10:56.224 13:41:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.224 13:41:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:10:56.224 13:41:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.224 13:41:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:56.224 8 00:10:56.224 13:41:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.224 13:41:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:10:56.224 13:41:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.224 13:41:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:56.224 9 00:10:56.224 13:41:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.224 13:41:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:10:56.224 13:41:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.482 13:41:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:56.482 10 00:10:56.482 13:41:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.482 13:41:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:10:56.482 13:41:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.482 13:41:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:56.482 13:41:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.482 13:41:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:10:56.482 13:41:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:10:56.482 13:41:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.483 13:41:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:56.741 13:41:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.741 13:41:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:10:57.000 13:41:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.000 13:41:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:58.376 13:41:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.376 13:41:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:10:58.376 13:41:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:10:58.376 13:41:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.376 13:41:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:59.312 13:41:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.312 00:10:59.312 real 0m3.100s 00:10:59.312 user 0m0.024s 00:10:59.312 sys 0m0.005s 00:10:59.312 13:41:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:59.312 13:41:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:59.312 ************************************ 00:10:59.312 END TEST scheduler_create_thread 00:10:59.312 ************************************ 00:10:59.312 13:41:59 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:10:59.312 13:41:59 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 1558792 00:10:59.312 13:41:59 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 1558792 ']' 00:10:59.312 13:41:59 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 1558792 00:10:59.312 13:41:59 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:10:59.312 13:41:59 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:59.312 13:41:59 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1558792 00:10:59.571 13:41:59 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:10:59.571 13:41:59 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:10:59.571 13:41:59 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1558792' 00:10:59.571 killing process with pid 1558792 00:10:59.571 13:41:59 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 1558792 00:10:59.571 13:41:59 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 1558792 00:10:59.831 [2024-12-05 13:41:59.479470] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
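[annotation] The scheduler_create_thread subtest above exercises the test-only scheduler_plugin RPCs: create pinned threads with a CPU mask (-m) and a target active percentage (-a), retune one by its returned thread id, then delete another. A minimal sketch of the same calls — it assumes scheduler_plugin.py from the test/event/scheduler directory is on PYTHONPATH, and the thread ids 11 and 12 are simply the ids this particular run happened to get back:

  # assumes $SPDK_DIR is a built SPDK tree and scheduler_plugin.py is on PYTHONPATH
  rpc="$SPDK_DIR/scripts/rpc.py --plugin scheduler_plugin"
  $rpc scheduler_thread_create -n active_pinned -m 0x1 -a 100   # busy thread pinned to core 0
  $rpc scheduler_thread_create -n half_active -a 0              # returns a thread id (11 in this run)
  $rpc scheduler_thread_set_active 11 50                        # raise that thread to 50% active
  $rpc scheduler_thread_delete 12                               # remove a thread by id (12 in this run)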
00:10:59.831 00:10:59.831 real 0m4.114s 00:10:59.831 user 0m6.568s 00:10:59.831 sys 0m0.379s 00:10:59.831 13:41:59 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:59.831 13:41:59 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:10:59.831 ************************************ 00:10:59.831 END TEST event_scheduler 00:10:59.831 ************************************ 00:11:00.090 13:41:59 event -- event/event.sh@51 -- # modprobe -n nbd 00:11:00.090 13:41:59 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:11:00.090 13:41:59 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:00.090 13:41:59 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:00.090 13:41:59 event -- common/autotest_common.sh@10 -- # set +x 00:11:00.090 ************************************ 00:11:00.090 START TEST app_repeat 00:11:00.090 ************************************ 00:11:00.090 13:41:59 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:11:00.090 13:41:59 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:00.090 13:41:59 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:00.090 13:41:59 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:11:00.090 13:41:59 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:11:00.090 13:41:59 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:11:00.090 13:41:59 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:11:00.090 13:41:59 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:11:00.090 13:41:59 event.app_repeat -- event/event.sh@19 -- # repeat_pid=1559626 00:11:00.090 13:41:59 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:11:00.090 13:41:59 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:11:00.090 13:41:59 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1559626' 00:11:00.090 Process app_repeat pid: 1559626 00:11:00.090 13:41:59 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:11:00.090 13:41:59 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:11:00.090 spdk_app_start Round 0 00:11:00.090 13:41:59 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1559626 /var/tmp/spdk-nbd.sock 00:11:00.090 13:41:59 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1559626 ']' 00:11:00.090 13:41:59 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:11:00.090 13:41:59 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:00.090 13:41:59 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:11:00.090 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:11:00.090 13:41:59 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:00.090 13:41:59 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:11:00.090 [2024-12-05 13:41:59.770046] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 
00:11:00.090 [2024-12-05 13:41:59.770115] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1559626 ] 00:11:00.090 [2024-12-05 13:41:59.845401] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:00.090 [2024-12-05 13:41:59.866292] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:00.090 [2024-12-05 13:41:59.866292] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:00.349 13:41:59 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:00.349 13:41:59 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:11:00.349 13:41:59 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:11:00.349 Malloc0 00:11:00.349 13:42:00 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:11:00.608 Malloc1 00:11:00.608 13:42:00 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:11:00.608 13:42:00 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:00.608 13:42:00 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:11:00.608 13:42:00 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:11:00.608 13:42:00 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:00.608 13:42:00 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:11:00.608 13:42:00 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:11:00.608 13:42:00 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:00.608 13:42:00 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:11:00.608 13:42:00 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:00.608 13:42:00 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:00.608 13:42:00 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:00.608 13:42:00 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:11:00.608 13:42:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:00.608 13:42:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:00.608 13:42:00 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:11:00.867 /dev/nbd0 00:11:00.867 13:42:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:00.867 13:42:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:00.867 13:42:00 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:11:00.867 13:42:00 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:11:00.867 13:42:00 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:00.867 13:42:00 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:00.867 13:42:00 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 
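[annotation] app_repeat's setup, as traced above, creates two 64 MiB malloc bdevs with a 4 KiB block size and exports them over NBD through the test's private RPC socket; the waitfornbd helper then confirms each device with a single direct-I/O block read. A minimal sketch of the same sequence — only $SPDK_DIR and the output filename are assumptions:

  # assumes $SPDK_DIR points at a built SPDK tree; app_repeat listens on its
  # own socket rather than the default one
  rpc="$SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
  $rpc bdev_malloc_create 64 4096         # -> Malloc0
  $rpc bdev_malloc_create 64 4096         # -> Malloc1
  $rpc nbd_start_disk Malloc0 /dev/nbd0
  $rpc nbd_start_disk Malloc1 /dev/nbd1

  # one-block read check, as the waitfornbd helper performs
  dd if=/dev/nbd0 of=nbdtest bs=4096 count=1 iflag=direct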
00:11:00.867 13:42:00 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:11:00.867 13:42:00 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:00.867 13:42:00 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:00.867 13:42:00 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:11:00.867 1+0 records in 00:11:00.867 1+0 records out 00:11:00.867 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000234226 s, 17.5 MB/s 00:11:00.867 13:42:00 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:11:00.867 13:42:00 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:11:00.867 13:42:00 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:11:00.867 13:42:00 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:00.867 13:42:00 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:11:00.867 13:42:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:00.867 13:42:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:00.867 13:42:00 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:11:01.146 /dev/nbd1 00:11:01.146 13:42:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:01.146 13:42:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:01.146 13:42:00 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:11:01.146 13:42:00 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:11:01.146 13:42:00 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:01.146 13:42:00 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:01.146 13:42:00 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:11:01.146 13:42:00 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:11:01.146 13:42:00 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:01.146 13:42:00 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:01.146 13:42:00 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:11:01.146 1+0 records in 00:11:01.146 1+0 records out 00:11:01.146 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000227131 s, 18.0 MB/s 00:11:01.146 13:42:00 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:11:01.146 13:42:00 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:11:01.146 13:42:00 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:11:01.146 13:42:00 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:01.146 13:42:00 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:11:01.146 13:42:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:01.146 13:42:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:01.146 13:42:00 event.app_repeat -- bdev/nbd_common.sh@95 
-- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:01.146 13:42:00 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:01.146 13:42:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:01.405 13:42:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:11:01.405 { 00:11:01.405 "nbd_device": "/dev/nbd0", 00:11:01.405 "bdev_name": "Malloc0" 00:11:01.405 }, 00:11:01.405 { 00:11:01.405 "nbd_device": "/dev/nbd1", 00:11:01.405 "bdev_name": "Malloc1" 00:11:01.405 } 00:11:01.405 ]' 00:11:01.405 13:42:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:11:01.405 { 00:11:01.405 "nbd_device": "/dev/nbd0", 00:11:01.405 "bdev_name": "Malloc0" 00:11:01.405 }, 00:11:01.405 { 00:11:01.405 "nbd_device": "/dev/nbd1", 00:11:01.405 "bdev_name": "Malloc1" 00:11:01.405 } 00:11:01.405 ]' 00:11:01.405 13:42:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:01.405 13:42:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:11:01.405 /dev/nbd1' 00:11:01.405 13:42:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:11:01.405 /dev/nbd1' 00:11:01.405 13:42:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:01.405 13:42:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:11:01.405 13:42:01 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:11:01.405 13:42:01 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:11:01.405 13:42:01 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:11:01.405 13:42:01 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:11:01.405 13:42:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:01.405 13:42:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:01.405 13:42:01 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:11:01.405 13:42:01 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:11:01.405 13:42:01 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:11:01.405 13:42:01 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:11:01.405 256+0 records in 00:11:01.405 256+0 records out 00:11:01.405 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0105905 s, 99.0 MB/s 00:11:01.405 13:42:01 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:01.405 13:42:01 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:11:01.405 256+0 records in 00:11:01.405 256+0 records out 00:11:01.405 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0130299 s, 80.5 MB/s 00:11:01.405 13:42:01 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:01.405 13:42:01 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:11:01.405 256+0 records in 00:11:01.405 256+0 records out 00:11:01.405 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0137551 s, 76.2 MB/s 00:11:01.405 13:42:01 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify 
'/dev/nbd0 /dev/nbd1' verify 00:11:01.405 13:42:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:01.405 13:42:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:01.405 13:42:01 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:11:01.405 13:42:01 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:11:01.405 13:42:01 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:11:01.405 13:42:01 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:11:01.405 13:42:01 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:01.405 13:42:01 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:11:01.405 13:42:01 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:01.405 13:42:01 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:11:01.405 13:42:01 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:11:01.405 13:42:01 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:11:01.405 13:42:01 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:01.405 13:42:01 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:01.405 13:42:01 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:01.405 13:42:01 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:11:01.405 13:42:01 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:01.405 13:42:01 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:01.663 13:42:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:01.663 13:42:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:01.663 13:42:01 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:01.663 13:42:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:01.663 13:42:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:01.663 13:42:01 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:01.663 13:42:01 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:11:01.663 13:42:01 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:11:01.663 13:42:01 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:01.663 13:42:01 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:11:01.922 13:42:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:01.922 13:42:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:01.922 13:42:01 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:01.922 13:42:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:01.922 13:42:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:01.922 13:42:01 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w 
nbd1 /proc/partitions 00:11:01.922 13:42:01 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:11:01.922 13:42:01 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:11:01.922 13:42:01 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:01.922 13:42:01 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:01.922 13:42:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:01.922 13:42:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:11:01.922 13:42:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:11:01.922 13:42:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:01.922 13:42:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:11:01.922 13:42:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:11:01.922 13:42:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:01.922 13:42:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:11:01.922 13:42:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:11:01.922 13:42:01 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:11:01.922 13:42:01 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:11:01.922 13:42:01 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:11:01.922 13:42:01 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:11:01.922 13:42:01 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:11:02.181 13:42:01 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:11:02.440 [2024-12-05 13:42:02.103116] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:02.440 [2024-12-05 13:42:02.121727] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:02.440 [2024-12-05 13:42:02.121727] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:02.440 [2024-12-05 13:42:02.161731] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:11:02.440 [2024-12-05 13:42:02.161774] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:11:05.751 13:42:04 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:11:05.751 13:42:04 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:11:05.751 spdk_app_start Round 1 00:11:05.751 13:42:04 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1559626 /var/tmp/spdk-nbd.sock 00:11:05.751 13:42:04 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1559626 ']' 00:11:05.751 13:42:04 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:11:05.751 13:42:04 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:05.751 13:42:04 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:11:05.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
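[Annotation] Round 0 above completes the full write/verify cycle that every app_repeat round repeats: seed a 1 MiB random file, push it through both NBD exports with O_DIRECT writes, byte-compare it back, then stop the disks and kill the app so the next round can start. A condensed sketch of that cycle, using only commands and paths that appear in the trace (SPDK stands in for the /var/jenkins/workspace/nvmf-phy-autotest/spdk prefix):

    SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    RPC="$SPDK/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    TMP=$SPDK/test/event/nbdrandtest

    dd if=/dev/urandom of=$TMP bs=4096 count=256             # 1 MiB of random data
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if=$TMP of=$nbd bs=4096 count=256 oflag=direct    # write it through each NBD export
    done
    for nbd in /dev/nbd0 /dev/nbd1; do
        cmp -b -n 1M $TMP $nbd                               # verify the bytes landed on the bdev
    done
    rm $TMP
    $RPC nbd_stop_disk /dev/nbd0
    $RPC nbd_stop_disk /dev/nbd1
    $RPC spdk_kill_instance SIGTERM                          # event.sh then sleeps 3 s and relaunches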
00:11:05.751 13:42:04 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:05.751 13:42:04 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:11:05.751 13:42:05 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:05.751 13:42:05 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:11:05.751 13:42:05 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:11:05.751 Malloc0 00:11:05.751 13:42:05 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:11:05.751 Malloc1 00:11:05.751 13:42:05 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:11:05.751 13:42:05 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:05.751 13:42:05 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:11:05.751 13:42:05 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:11:05.751 13:42:05 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:05.751 13:42:05 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:11:05.751 13:42:05 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:11:05.751 13:42:05 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:05.751 13:42:05 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:11:05.751 13:42:05 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:05.751 13:42:05 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:05.751 13:42:05 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:05.751 13:42:05 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:11:05.751 13:42:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:05.751 13:42:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:05.751 13:42:05 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:11:06.010 /dev/nbd0 00:11:06.010 13:42:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:06.010 13:42:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:06.010 13:42:05 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:11:06.010 13:42:05 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:11:06.010 13:42:05 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:06.010 13:42:05 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:06.010 13:42:05 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:11:06.010 13:42:05 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:11:06.010 13:42:05 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:06.010 13:42:05 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:06.010 13:42:05 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 
iflag=direct 00:11:06.010 1+0 records in 00:11:06.010 1+0 records out 00:11:06.010 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000186018 s, 22.0 MB/s 00:11:06.010 13:42:05 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:11:06.010 13:42:05 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:11:06.010 13:42:05 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:11:06.010 13:42:05 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:06.010 13:42:05 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:11:06.010 13:42:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:06.010 13:42:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:06.010 13:42:05 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:11:06.269 /dev/nbd1 00:11:06.269 13:42:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:06.269 13:42:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:06.269 13:42:05 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:11:06.269 13:42:05 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:11:06.269 13:42:05 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:06.269 13:42:05 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:06.269 13:42:05 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:11:06.269 13:42:06 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:11:06.269 13:42:06 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:06.269 13:42:06 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:06.270 13:42:06 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:11:06.270 1+0 records in 00:11:06.270 1+0 records out 00:11:06.270 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000241604 s, 17.0 MB/s 00:11:06.270 13:42:06 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:11:06.270 13:42:06 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:11:06.270 13:42:06 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:11:06.270 13:42:06 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:06.270 13:42:06 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:11:06.270 13:42:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:06.270 13:42:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:06.270 13:42:06 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:06.270 13:42:06 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:06.270 13:42:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:06.529 13:42:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:11:06.529 { 00:11:06.529 
"nbd_device": "/dev/nbd0", 00:11:06.529 "bdev_name": "Malloc0" 00:11:06.529 }, 00:11:06.529 { 00:11:06.529 "nbd_device": "/dev/nbd1", 00:11:06.529 "bdev_name": "Malloc1" 00:11:06.529 } 00:11:06.529 ]' 00:11:06.529 13:42:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:11:06.529 { 00:11:06.529 "nbd_device": "/dev/nbd0", 00:11:06.529 "bdev_name": "Malloc0" 00:11:06.529 }, 00:11:06.529 { 00:11:06.529 "nbd_device": "/dev/nbd1", 00:11:06.529 "bdev_name": "Malloc1" 00:11:06.529 } 00:11:06.529 ]' 00:11:06.529 13:42:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:06.529 13:42:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:11:06.529 /dev/nbd1' 00:11:06.529 13:42:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:11:06.529 /dev/nbd1' 00:11:06.529 13:42:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:06.529 13:42:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:11:06.529 13:42:06 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:11:06.529 13:42:06 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:11:06.529 13:42:06 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:11:06.529 13:42:06 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:11:06.529 13:42:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:06.529 13:42:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:06.529 13:42:06 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:11:06.529 13:42:06 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:11:06.529 13:42:06 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:11:06.529 13:42:06 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:11:06.529 256+0 records in 00:11:06.529 256+0 records out 00:11:06.529 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0106618 s, 98.3 MB/s 00:11:06.529 13:42:06 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:06.529 13:42:06 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:11:06.529 256+0 records in 00:11:06.529 256+0 records out 00:11:06.529 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0127057 s, 82.5 MB/s 00:11:06.529 13:42:06 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:06.529 13:42:06 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:11:06.529 256+0 records in 00:11:06.529 256+0 records out 00:11:06.529 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0134594 s, 77.9 MB/s 00:11:06.529 13:42:06 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:11:06.529 13:42:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:06.529 13:42:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:06.529 13:42:06 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:11:06.529 13:42:06 event.app_repeat -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:11:06.529 13:42:06 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:11:06.529 13:42:06 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:11:06.529 13:42:06 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:06.529 13:42:06 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:11:06.529 13:42:06 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:06.529 13:42:06 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:11:06.529 13:42:06 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:11:06.529 13:42:06 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:11:06.530 13:42:06 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:06.530 13:42:06 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:06.530 13:42:06 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:06.530 13:42:06 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:11:06.530 13:42:06 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:06.530 13:42:06 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:06.814 13:42:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:06.814 13:42:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:06.814 13:42:06 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:06.814 13:42:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:06.814 13:42:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:06.814 13:42:06 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:06.814 13:42:06 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:11:06.814 13:42:06 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:11:06.814 13:42:06 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:06.814 13:42:06 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:11:07.073 13:42:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:07.073 13:42:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:07.073 13:42:06 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:07.073 13:42:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:07.073 13:42:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:07.073 13:42:06 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:07.073 13:42:06 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:11:07.073 13:42:06 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:11:07.073 13:42:06 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:07.073 13:42:06 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:11:07.073 13:42:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:07.073 13:42:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:11:07.073 13:42:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:11:07.073 13:42:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:07.331 13:42:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:11:07.331 13:42:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:11:07.331 13:42:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:07.331 13:42:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:11:07.331 13:42:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:11:07.331 13:42:06 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:11:07.331 13:42:06 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:11:07.331 13:42:06 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:11:07.331 13:42:06 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:11:07.331 13:42:06 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:11:07.331 13:42:07 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:11:07.589 [2024-12-05 13:42:07.285510] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:07.589 [2024-12-05 13:42:07.304099] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:07.589 [2024-12-05 13:42:07.304099] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:07.589 [2024-12-05 13:42:07.344060] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:11:07.589 [2024-12-05 13:42:07.344104] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:11:10.869 13:42:10 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:11:10.869 13:42:10 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:11:10.869 spdk_app_start Round 2 00:11:10.869 13:42:10 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1559626 /var/tmp/spdk-nbd.sock 00:11:10.869 13:42:10 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1559626 ']' 00:11:10.869 13:42:10 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:11:10.869 13:42:10 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:10.869 13:42:10 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:11:10.869 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
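[Annotation] After the disks are stopped, nbd_get_count confirms nothing is still exported by counting /dev/nbd entries in the nbd_get_disks JSON, which is why the trace above ends each round with count=0 against an empty '[]' list. A sketch of that check, reusing the RPC shorthand from the earlier sketch (the || true guard is my assumption for the zero-match case; the trace shows a bare true being evaluated when grep finds nothing):

    disks_json=$($RPC nbd_get_disks)                         # '[]' once both exports are stopped
    names=$(echo "$disks_json" | jq -r '.[] | .nbd_device')
    count=$(echo "$names" | grep -c /dev/nbd || true)        # count remaining /dev/nbd* entries
    if [ "$count" -ne 0 ]; then
        echo "NBD devices still exported" >&2
        return 1
    fi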
00:11:10.869 13:42:10 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:10.869 13:42:10 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:11:10.869 13:42:10 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:10.869 13:42:10 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:11:10.869 13:42:10 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:11:10.869 Malloc0 00:11:10.869 13:42:10 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:11:10.869 Malloc1 00:11:11.128 13:42:10 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:11:11.128 13:42:10 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:11.128 13:42:10 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:11:11.128 13:42:10 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:11:11.128 13:42:10 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:11.128 13:42:10 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:11:11.128 13:42:10 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:11:11.128 13:42:10 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:11.128 13:42:10 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:11:11.128 13:42:10 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:11.128 13:42:10 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:11.128 13:42:10 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:11.128 13:42:10 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:11:11.128 13:42:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:11.128 13:42:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:11.128 13:42:10 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:11:11.128 /dev/nbd0 00:11:11.128 13:42:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:11.128 13:42:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:11.128 13:42:10 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:11:11.128 13:42:10 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:11:11.128 13:42:10 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:11.128 13:42:10 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:11.128 13:42:10 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:11:11.128 13:42:10 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:11:11.128 13:42:10 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:11.128 13:42:10 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:11.128 13:42:10 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 
iflag=direct 00:11:11.387 1+0 records in 00:11:11.387 1+0 records out 00:11:11.387 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000237705 s, 17.2 MB/s 00:11:11.387 13:42:10 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:11:11.387 13:42:10 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:11:11.387 13:42:10 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:11:11.387 13:42:10 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:11.387 13:42:10 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:11:11.387 13:42:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:11.387 13:42:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:11.387 13:42:10 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:11:11.387 /dev/nbd1 00:11:11.387 13:42:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:11.387 13:42:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:11.387 13:42:11 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:11:11.387 13:42:11 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:11:11.387 13:42:11 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:11.387 13:42:11 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:11.387 13:42:11 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:11:11.387 13:42:11 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:11:11.387 13:42:11 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:11.387 13:42:11 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:11.387 13:42:11 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:11:11.387 1+0 records in 00:11:11.387 1+0 records out 00:11:11.387 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000236826 s, 17.3 MB/s 00:11:11.387 13:42:11 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:11:11.387 13:42:11 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:11:11.387 13:42:11 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:11:11.387 13:42:11 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:11.387 13:42:11 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:11:11.387 13:42:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:11.387 13:42:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:11.387 13:42:11 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:11.387 13:42:11 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:11.387 13:42:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:11.646 13:42:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:11:11.646 { 00:11:11.646 
"nbd_device": "/dev/nbd0", 00:11:11.646 "bdev_name": "Malloc0" 00:11:11.646 }, 00:11:11.646 { 00:11:11.646 "nbd_device": "/dev/nbd1", 00:11:11.646 "bdev_name": "Malloc1" 00:11:11.646 } 00:11:11.646 ]' 00:11:11.646 13:42:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:11:11.646 { 00:11:11.646 "nbd_device": "/dev/nbd0", 00:11:11.646 "bdev_name": "Malloc0" 00:11:11.646 }, 00:11:11.646 { 00:11:11.646 "nbd_device": "/dev/nbd1", 00:11:11.646 "bdev_name": "Malloc1" 00:11:11.646 } 00:11:11.646 ]' 00:11:11.646 13:42:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:11.646 13:42:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:11:11.646 /dev/nbd1' 00:11:11.646 13:42:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:11:11.646 /dev/nbd1' 00:11:11.646 13:42:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:11.646 13:42:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:11:11.646 13:42:11 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:11:11.646 13:42:11 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:11:11.646 13:42:11 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:11:11.646 13:42:11 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:11:11.646 13:42:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:11.646 13:42:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:11.646 13:42:11 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:11:11.646 13:42:11 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:11:11.646 13:42:11 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:11:11.646 13:42:11 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:11:11.646 256+0 records in 00:11:11.646 256+0 records out 00:11:11.646 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00993672 s, 106 MB/s 00:11:11.646 13:42:11 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:11.646 13:42:11 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:11:11.646 256+0 records in 00:11:11.646 256+0 records out 00:11:11.646 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0129155 s, 81.2 MB/s 00:11:11.646 13:42:11 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:11.646 13:42:11 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:11:11.905 256+0 records in 00:11:11.905 256+0 records out 00:11:11.905 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0136935 s, 76.6 MB/s 00:11:11.905 13:42:11 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:11:11.905 13:42:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:11.905 13:42:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:11.905 13:42:11 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:11:11.905 13:42:11 event.app_repeat -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:11:11.905 13:42:11 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:11:11.905 13:42:11 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:11:11.905 13:42:11 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:11.905 13:42:11 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:11:11.905 13:42:11 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:11.905 13:42:11 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:11:11.905 13:42:11 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:11:11.905 13:42:11 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:11:11.905 13:42:11 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:11.905 13:42:11 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:11.905 13:42:11 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:11.905 13:42:11 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:11:11.905 13:42:11 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:11.905 13:42:11 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:11.905 13:42:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:11.905 13:42:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:11.905 13:42:11 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:11.905 13:42:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:11.905 13:42:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:11.905 13:42:11 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:11.905 13:42:11 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:11:11.905 13:42:11 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:11:11.905 13:42:11 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:11.905 13:42:11 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:11:12.164 13:42:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:12.164 13:42:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:12.164 13:42:11 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:12.164 13:42:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:12.164 13:42:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:12.164 13:42:11 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:12.164 13:42:11 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:11:12.164 13:42:11 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:11:12.164 13:42:11 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:12.164 13:42:11 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:11:12.164 13:42:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:12.423 13:42:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:11:12.423 13:42:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:11:12.423 13:42:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:12.423 13:42:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:11:12.423 13:42:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:11:12.423 13:42:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:12.423 13:42:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:11:12.423 13:42:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:11:12.423 13:42:12 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:11:12.423 13:42:12 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:11:12.423 13:42:12 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:11:12.423 13:42:12 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:11:12.423 13:42:12 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:11:12.682 13:42:12 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:11:12.682 [2024-12-05 13:42:12.489022] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:12.682 [2024-12-05 13:42:12.507846] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:12.682 [2024-12-05 13:42:12.507848] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:12.940 [2024-12-05 13:42:12.547957] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:11:12.940 [2024-12-05 13:42:12.547995] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:11:16.234 13:42:15 event.app_repeat -- event/event.sh@38 -- # waitforlisten 1559626 /var/tmp/spdk-nbd.sock 00:11:16.234 13:42:15 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1559626 ']' 00:11:16.234 13:42:15 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:11:16.234 13:42:15 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:16.234 13:42:15 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:11:16.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
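[Annotation] Every nbd_start_disk in these rounds is followed by the waitfornbd probing visible in the traces: up to 20 checks of /proc/partitions for the device name, then a single O_DIRECT 4096-byte read through dd and a size check on the result. A sketch of that helper under the same paths (the sleep between retries is an assumption; every round here matches on the first probe, so no delay shows up in the trace):

    waitfornbd() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1                                        # assumed back-off between probes
        done
        # prove the device answers reads: one direct 4096-byte block into a scratch file
        dd if=/dev/$nbd_name of=$SPDK/test/event/nbdtest bs=4096 count=1 iflag=direct
        local size
        size=$(stat -c %s $SPDK/test/event/nbdtest)
        rm -f $SPDK/test/event/nbdtest
        [ "$size" != 0 ]                                     # non-empty read means the export is live
    }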
00:11:16.234 13:42:15 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:16.234 13:42:15 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:11:16.234 13:42:15 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:16.234 13:42:15 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:11:16.234 13:42:15 event.app_repeat -- event/event.sh@39 -- # killprocess 1559626 00:11:16.234 13:42:15 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 1559626 ']' 00:11:16.234 13:42:15 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 1559626 00:11:16.234 13:42:15 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:11:16.234 13:42:15 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:16.234 13:42:15 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1559626 00:11:16.234 13:42:15 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:16.234 13:42:15 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:16.234 13:42:15 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1559626' 00:11:16.234 killing process with pid 1559626 00:11:16.234 13:42:15 event.app_repeat -- common/autotest_common.sh@973 -- # kill 1559626 00:11:16.234 13:42:15 event.app_repeat -- common/autotest_common.sh@978 -- # wait 1559626 00:11:16.234 spdk_app_start is called in Round 0. 00:11:16.234 Shutdown signal received, stop current app iteration 00:11:16.234 Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 reinitialization... 00:11:16.234 spdk_app_start is called in Round 1. 00:11:16.234 Shutdown signal received, stop current app iteration 00:11:16.234 Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 reinitialization... 00:11:16.234 spdk_app_start is called in Round 2. 00:11:16.234 Shutdown signal received, stop current app iteration 00:11:16.234 Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 reinitialization... 00:11:16.234 spdk_app_start is called in Round 3. 
00:11:16.234 Shutdown signal received, stop current app iteration 00:11:16.234 13:42:15 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:11:16.234 13:42:15 event.app_repeat -- event/event.sh@42 -- # return 0 00:11:16.234 00:11:16.234 real 0m15.999s 00:11:16.234 user 0m35.057s 00:11:16.234 sys 0m2.491s 00:11:16.234 13:42:15 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:16.234 13:42:15 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:11:16.234 ************************************ 00:11:16.234 END TEST app_repeat 00:11:16.234 ************************************ 00:11:16.234 13:42:15 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:11:16.234 13:42:15 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/cpu_locks.sh 00:11:16.234 13:42:15 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:16.234 13:42:15 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:16.234 13:42:15 event -- common/autotest_common.sh@10 -- # set +x 00:11:16.234 ************************************ 00:11:16.234 START TEST cpu_locks 00:11:16.234 ************************************ 00:11:16.234 13:42:15 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/cpu_locks.sh 00:11:16.234 * Looking for test storage... 00:11:16.234 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event 00:11:16.234 13:42:15 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:16.234 13:42:15 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:11:16.234 13:42:15 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:16.234 13:42:15 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:16.234 13:42:15 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:16.234 13:42:15 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:16.234 13:42:15 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:16.234 13:42:15 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:11:16.234 13:42:15 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:11:16.234 13:42:15 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:11:16.234 13:42:15 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:11:16.234 13:42:15 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:11:16.234 13:42:15 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:11:16.234 13:42:15 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:11:16.234 13:42:15 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:16.234 13:42:15 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:11:16.234 13:42:15 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:11:16.234 13:42:15 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:16.234 13:42:15 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:16.234 13:42:15 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:11:16.234 13:42:15 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:11:16.234 13:42:15 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:16.234 13:42:15 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:11:16.234 13:42:15 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:11:16.234 13:42:15 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:11:16.234 13:42:15 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:11:16.234 13:42:15 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:16.234 13:42:15 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:11:16.234 13:42:15 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:11:16.234 13:42:15 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:16.234 13:42:15 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:16.234 13:42:15 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:11:16.234 13:42:15 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:16.234 13:42:15 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:16.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:16.235 --rc genhtml_branch_coverage=1 00:11:16.235 --rc genhtml_function_coverage=1 00:11:16.235 --rc genhtml_legend=1 00:11:16.235 --rc geninfo_all_blocks=1 00:11:16.235 --rc geninfo_unexecuted_blocks=1 00:11:16.235 00:11:16.235 ' 00:11:16.235 13:42:15 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:16.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:16.235 --rc genhtml_branch_coverage=1 00:11:16.235 --rc genhtml_function_coverage=1 00:11:16.235 --rc genhtml_legend=1 00:11:16.235 --rc geninfo_all_blocks=1 00:11:16.235 --rc geninfo_unexecuted_blocks=1 00:11:16.235 00:11:16.235 ' 00:11:16.235 13:42:15 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:16.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:16.235 --rc genhtml_branch_coverage=1 00:11:16.235 --rc genhtml_function_coverage=1 00:11:16.235 --rc genhtml_legend=1 00:11:16.235 --rc geninfo_all_blocks=1 00:11:16.235 --rc geninfo_unexecuted_blocks=1 00:11:16.235 00:11:16.235 ' 00:11:16.235 13:42:15 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:16.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:16.235 --rc genhtml_branch_coverage=1 00:11:16.235 --rc genhtml_function_coverage=1 00:11:16.235 --rc genhtml_legend=1 00:11:16.235 --rc geninfo_all_blocks=1 00:11:16.235 --rc geninfo_unexecuted_blocks=1 00:11:16.235 00:11:16.235 ' 00:11:16.235 13:42:15 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:11:16.235 13:42:15 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:11:16.235 13:42:15 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:11:16.235 13:42:15 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:11:16.235 13:42:15 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:16.235 13:42:15 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:16.235 13:42:15 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:16.235 ************************************ 
00:11:16.235 START TEST default_locks 00:11:16.235 ************************************ 00:11:16.235 13:42:16 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:11:16.235 13:42:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1563386 00:11:16.235 13:42:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 1563386 00:11:16.235 13:42:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:11:16.235 13:42:16 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 1563386 ']' 00:11:16.235 13:42:16 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:16.235 13:42:16 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:16.235 13:42:16 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:16.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:16.235 13:42:16 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:16.235 13:42:16 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:11:16.235 [2024-12-05 13:42:16.053908] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 00:11:16.235 [2024-12-05 13:42:16.053945] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1563386 ] 00:11:16.492 [2024-12-05 13:42:16.124604] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:16.492 [2024-12-05 13:42:16.146244] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:16.492 13:42:16 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:16.492 13:42:16 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:11:16.492 13:42:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 1563386 00:11:16.492 13:42:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 1563386 00:11:16.492 13:42:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:11:16.749 lslocks: write error 00:11:16.749 13:42:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 1563386 00:11:16.749 13:42:16 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 1563386 ']' 00:11:16.749 13:42:16 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 1563386 00:11:16.749 13:42:16 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:11:16.749 13:42:16 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:16.749 13:42:16 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1563386 00:11:17.007 13:42:16 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:17.007 13:42:16 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:17.007 13:42:16 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
1563386' 00:11:17.007 killing process with pid 1563386 00:11:17.007 13:42:16 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 1563386 00:11:17.007 13:42:16 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 1563386 00:11:17.264 13:42:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1563386 00:11:17.264 13:42:16 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:11:17.264 13:42:16 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 1563386 00:11:17.264 13:42:16 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:11:17.264 13:42:16 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:17.264 13:42:16 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:11:17.264 13:42:16 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:17.264 13:42:16 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 1563386 00:11:17.264 13:42:16 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 1563386 ']' 00:11:17.264 13:42:16 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:17.264 13:42:16 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:17.264 13:42:16 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:17.264 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:17.264 13:42:16 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:17.264 13:42:16 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:11:17.264 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (1563386) - No such process 00:11:17.264 ERROR: process (pid: 1563386) is no longer running 00:11:17.264 13:42:16 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:17.264 13:42:16 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:11:17.264 13:42:16 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:11:17.264 13:42:16 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:17.264 13:42:16 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:17.264 13:42:16 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:17.264 13:42:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:11:17.264 13:42:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:11:17.264 13:42:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:11:17.264 13:42:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:11:17.264 00:11:17.264 real 0m0.923s 00:11:17.264 user 0m0.862s 00:11:17.265 sys 0m0.456s 00:11:17.265 13:42:16 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:17.265 13:42:16 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:11:17.265 ************************************ 00:11:17.265 END TEST default_locks 00:11:17.265 ************************************ 00:11:17.265 13:42:16 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:11:17.265 13:42:16 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:17.265 13:42:16 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:17.265 13:42:16 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:17.265 ************************************ 00:11:17.265 START TEST default_locks_via_rpc 00:11:17.265 ************************************ 00:11:17.265 13:42:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:11:17.265 13:42:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1563674 00:11:17.265 13:42:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 1563674 00:11:17.265 13:42:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:11:17.265 13:42:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1563674 ']' 00:11:17.265 13:42:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:17.265 13:42:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:17.265 13:42:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:17.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
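default_locks therefore passes on two checks: a freshly started target must hold an spdk_cpu_lock file, and once the process is gone no lock file may remain. The stray "lslocks: write error" above is benign: grep -q exits on its first match, so lslocks writes into a closed pipe. Both checks reduce to roughly the following (function names match the trace; the bodies are reconstructed):

    locks_exist() {
      lslocks -p "$1" | grep -q spdk_cpu_lock      # pid must hold a core lock
    }

    no_locks() {
      shopt -s nullglob                            # empty glob -> empty array
      local lock_files=(/var/tmp/spdk_cpu_lock_*)
      (( ${#lock_files[@]} == 0 ))                 # the "(( 0 != 0 ))" test above
    }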
00:11:17.265 13:42:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:17.265 13:42:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:17.265 [2024-12-05 13:42:17.047508] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 00:11:17.265 [2024-12-05 13:42:17.047550] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1563674 ] 00:11:17.523 [2024-12-05 13:42:17.121508] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:17.523 [2024-12-05 13:42:17.143090] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:17.523 13:42:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:17.523 13:42:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:11:17.523 13:42:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:11:17.523 13:42:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.523 13:42:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:17.523 13:42:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.523 13:42:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:11:17.523 13:42:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:11:17.523 13:42:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:11:17.523 13:42:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:11:17.523 13:42:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:11:17.523 13:42:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.523 13:42:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:17.523 13:42:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.523 13:42:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 1563674 00:11:17.523 13:42:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 1563674 00:11:17.523 13:42:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:11:18.110 13:42:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 1563674 00:11:18.110 13:42:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 1563674 ']' 00:11:18.110 13:42:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 1563674 00:11:18.110 13:42:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:11:18.110 13:42:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:18.110 13:42:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1563674 00:11:18.110 13:42:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:18.110 
13:42:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:18.110 13:42:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1563674' 00:11:18.110 killing process with pid 1563674 00:11:18.110 13:42:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 1563674 00:11:18.110 13:42:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 1563674 00:11:18.368 00:11:18.368 real 0m1.190s 00:11:18.368 user 0m1.131s 00:11:18.368 sys 0m0.560s 00:11:18.368 13:42:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:18.368 13:42:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:18.368 ************************************ 00:11:18.368 END TEST default_locks_via_rpc 00:11:18.368 ************************************ 00:11:18.625 13:42:18 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:11:18.625 13:42:18 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:18.625 13:42:18 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:18.625 13:42:18 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:18.625 ************************************ 00:11:18.625 START TEST non_locking_app_on_locked_coremask 00:11:18.625 ************************************ 00:11:18.625 13:42:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:11:18.625 13:42:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1563960 00:11:18.625 13:42:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 1563960 /var/tmp/spdk.sock 00:11:18.625 13:42:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:11:18.625 13:42:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1563960 ']' 00:11:18.625 13:42:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:18.625 13:42:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:18.625 13:42:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:18.625 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:18.625 13:42:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:18.625 13:42:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:18.625 [2024-12-05 13:42:18.306556] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 
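default_locks_via_rpc, closed out above, toggles the same lock files over JSON-RPC instead of via process lifetime: framework_disable_cpumask_locks releases a running target's core locks and framework_enable_cpumask_locks re-claims them. Driven by hand against the socket, the calls would look roughly like this (scripts/rpc.py invocation assumed; the method names are the ones in the trace):

    # Release the locks of the running spdk_tgt ...
    scripts/rpc.py -s /var/tmp/spdk.sock framework_disable_cpumask_locks
    # ... /var/tmp/spdk_cpu_lock_000 should now be gone (no_locks passes) ...
    scripts/rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks
    # ... and locks_exist succeeds again for the target's pid.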
00:11:18.626 [2024-12-05 13:42:18.306595] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1563960 ] 00:11:18.626 [2024-12-05 13:42:18.379361] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:18.626 [2024-12-05 13:42:18.399334] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:18.883 13:42:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:18.883 13:42:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:11:18.883 13:42:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1563971 00:11:18.883 13:42:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 1563971 /var/tmp/spdk2.sock 00:11:18.883 13:42:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:11:18.883 13:42:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1563971 ']' 00:11:18.883 13:42:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:18.883 13:42:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:18.883 13:42:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:18.883 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:18.883 13:42:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:18.883 13:42:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:18.883 [2024-12-05 13:42:18.650154] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 00:11:18.883 [2024-12-05 13:42:18.650193] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1563971 ] 00:11:18.883 [2024-12-05 13:42:18.733484] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
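This second launch is the whole point of non_locking_app_on_locked_coremask: core 0 is already locked by the first target, yet the new instance starts cleanly because --disable-cpumask-locks skips lock acquisition entirely, hence the "CPU core locks deactivated" notice above. The pair reduces to (binary path shortened from the trace):

    # First instance claims core 0 and creates /var/tmp/spdk_cpu_lock_000.
    build/bin/spdk_tgt -m 0x1 &
    # Second instance shares core 0 but never touches the lock files.
    build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &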
00:11:18.883 [2024-12-05 13:42:18.733512] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:19.140 [2024-12-05 13:42:18.774658] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:19.707 13:42:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:19.707 13:42:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:11:19.707 13:42:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 1563960 00:11:19.707 13:42:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1563960 00:11:19.707 13:42:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:11:20.274 lslocks: write error 00:11:20.274 13:42:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 1563960 00:11:20.274 13:42:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1563960 ']' 00:11:20.274 13:42:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 1563960 00:11:20.274 13:42:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:11:20.274 13:42:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:20.274 13:42:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1563960 00:11:20.274 13:42:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:20.274 13:42:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:20.274 13:42:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1563960' 00:11:20.274 killing process with pid 1563960 00:11:20.274 13:42:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 1563960 00:11:20.274 13:42:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 1563960 00:11:20.842 13:42:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 1563971 00:11:20.842 13:42:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1563971 ']' 00:11:20.843 13:42:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 1563971 00:11:20.843 13:42:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:11:20.843 13:42:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:20.843 13:42:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1563971 00:11:20.843 13:42:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:20.843 13:42:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:20.843 13:42:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1563971' 00:11:20.843 
killing process with pid 1563971 00:11:20.843 13:42:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 1563971 00:11:20.843 13:42:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 1563971 00:11:21.411 00:11:21.411 real 0m2.705s 00:11:21.411 user 0m2.822s 00:11:21.411 sys 0m0.908s 00:11:21.411 13:42:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:21.411 13:42:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:21.411 ************************************ 00:11:21.411 END TEST non_locking_app_on_locked_coremask 00:11:21.411 ************************************ 00:11:21.411 13:42:20 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:11:21.411 13:42:20 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:21.411 13:42:20 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:21.411 13:42:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:21.411 ************************************ 00:11:21.411 START TEST locking_app_on_unlocked_coremask 00:11:21.411 ************************************ 00:11:21.411 13:42:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:11:21.411 13:42:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1564493 00:11:21.411 13:42:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 1564493 /var/tmp/spdk.sock 00:11:21.411 13:42:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:11:21.412 13:42:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1564493 ']' 00:11:21.412 13:42:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:21.412 13:42:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:21.412 13:42:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:21.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:21.412 13:42:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:21.412 13:42:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:21.412 [2024-12-05 13:42:21.079773] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 00:11:21.412 [2024-12-05 13:42:21.079816] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1564493 ] 00:11:21.412 [2024-12-05 13:42:21.155021] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:11:21.412 [2024-12-05 13:42:21.155046] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:21.412 [2024-12-05 13:42:21.176474] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:21.670 13:42:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:21.670 13:42:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:11:21.670 13:42:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1564529 00:11:21.670 13:42:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 1564529 /var/tmp/spdk2.sock 00:11:21.670 13:42:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:11:21.670 13:42:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1564529 ']' 00:11:21.670 13:42:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:21.670 13:42:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:21.670 13:42:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:21.670 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:21.670 13:42:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:21.670 13:42:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:21.670 [2024-12-05 13:42:21.422139] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 
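locking_app_on_unlocked_coremask inverts the previous case: here the first target runs with --disable-cpumask-locks (the "deactivated" notice above), so core 0 is busy but unclaimed, and this second, lock-enforcing instance can still acquire it. Schematically:

    # Holder opts out of locking, so no lock file guards core 0 ...
    build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks &
    # ... and the lock-enforcing instance claims the core successfully.
    build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &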
00:11:21.670 [2024-12-05 13:42:21.422186] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1564529 ] 00:11:21.670 [2024-12-05 13:42:21.505051] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:21.929 [2024-12-05 13:42:21.546402] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:22.496 13:42:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:22.496 13:42:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:11:22.496 13:42:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 1564529 00:11:22.496 13:42:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1564529 00:11:22.496 13:42:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:11:23.064 lslocks: write error 00:11:23.064 13:42:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 1564493 00:11:23.064 13:42:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1564493 ']' 00:11:23.064 13:42:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 1564493 00:11:23.064 13:42:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:11:23.064 13:42:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:23.064 13:42:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1564493 00:11:23.064 13:42:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:23.064 13:42:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:23.064 13:42:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1564493' 00:11:23.064 killing process with pid 1564493 00:11:23.064 13:42:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 1564493 00:11:23.064 13:42:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 1564493 00:11:23.631 13:42:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 1564529 00:11:23.631 13:42:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1564529 ']' 00:11:23.631 13:42:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 1564529 00:11:23.631 13:42:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:11:23.631 13:42:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:23.631 13:42:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1564529 00:11:23.631 13:42:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:23.631 13:42:23 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:23.631 13:42:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1564529' 00:11:23.631 killing process with pid 1564529 00:11:23.631 13:42:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 1564529 00:11:23.631 13:42:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 1564529 00:11:24.199 00:11:24.199 real 0m2.720s 00:11:24.199 user 0m2.852s 00:11:24.199 sys 0m0.907s 00:11:24.199 13:42:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:24.199 13:42:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:24.199 ************************************ 00:11:24.199 END TEST locking_app_on_unlocked_coremask 00:11:24.199 ************************************ 00:11:24.199 13:42:23 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:11:24.199 13:42:23 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:24.199 13:42:23 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:24.199 13:42:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:24.199 ************************************ 00:11:24.199 START TEST locking_app_on_locked_coremask 00:11:24.199 ************************************ 00:11:24.199 13:42:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:11:24.199 13:42:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1564997 00:11:24.199 13:42:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 1564997 /var/tmp/spdk.sock 00:11:24.199 13:42:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:11:24.199 13:42:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1564997 ']' 00:11:24.199 13:42:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:24.199 13:42:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:24.199 13:42:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:24.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:24.199 13:42:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:24.199 13:42:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:24.199 [2024-12-05 13:42:23.867293] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 
00:11:24.199 [2024-12-05 13:42:23.867333] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1564997 ] 00:11:24.199 [2024-12-05 13:42:23.938715] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:24.199 [2024-12-05 13:42:23.960117] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:24.456 13:42:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:24.456 13:42:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:11:24.457 13:42:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1565095 00:11:24.457 13:42:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1565095 /var/tmp/spdk2.sock 00:11:24.457 13:42:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:11:24.457 13:42:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:11:24.457 13:42:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 1565095 /var/tmp/spdk2.sock 00:11:24.457 13:42:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:11:24.457 13:42:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:24.457 13:42:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:11:24.457 13:42:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:24.457 13:42:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 1565095 /var/tmp/spdk2.sock 00:11:24.457 13:42:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1565095 ']' 00:11:24.457 13:42:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:24.457 13:42:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:24.457 13:42:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:24.457 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:24.457 13:42:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:24.457 13:42:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:24.457 [2024-12-05 13:42:24.221316] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 
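Because this second target must fail to start, waitforlisten runs under the NOT wrapper whose bookkeeping (valid_exec_arg, the es variable) fills the trace above. Stripped to its core, NOT just inverts the exit status of a command that is expected to fail; the real helper additionally screens signal deaths (the "(( es > 128 ))" check) against an allow-list:

    NOT() {
      local es=0
      "$@" || es=$?        # run the wrapped command, keep its status
      (( !es == 0 ))       # succeed only if the command failed
    }

So NOT waitforlisten 1565095 /var/tmp/spdk2.sock passes precisely because the second target exits with "Unable to acquire lock on assigned core mask", as seen just below.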
00:11:24.457 [2024-12-05 13:42:24.221359] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1565095 ] 00:11:24.457 [2024-12-05 13:42:24.304661] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1564997 has claimed it. 00:11:24.457 [2024-12-05 13:42:24.304713] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:11:25.120 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (1565095) - No such process 00:11:25.120 ERROR: process (pid: 1565095) is no longer running 00:11:25.120 13:42:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:25.120 13:42:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:11:25.120 13:42:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:11:25.120 13:42:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:25.120 13:42:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:25.120 13:42:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:25.120 13:42:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 1564997 00:11:25.121 13:42:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1564997 00:11:25.121 13:42:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:11:25.385 lslocks: write error 00:11:25.385 13:42:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 1564997 00:11:25.385 13:42:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1564997 ']' 00:11:25.385 13:42:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 1564997 00:11:25.385 13:42:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:11:25.385 13:42:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:25.385 13:42:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1564997 00:11:25.385 13:42:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:25.385 13:42:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:25.385 13:42:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1564997' 00:11:25.385 killing process with pid 1564997 00:11:25.385 13:42:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 1564997 00:11:25.385 13:42:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 1564997 00:11:25.645 00:11:25.645 real 0m1.541s 00:11:25.645 user 0m1.621s 00:11:25.645 sys 0m0.521s 00:11:25.645 13:42:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:11:25.645 13:42:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:25.645 ************************************ 00:11:25.645 END TEST locking_app_on_locked_coremask 00:11:25.645 ************************************ 00:11:25.645 13:42:25 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:11:25.645 13:42:25 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:25.645 13:42:25 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:25.645 13:42:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:25.645 ************************************ 00:11:25.645 START TEST locking_overlapped_coremask 00:11:25.645 ************************************ 00:11:25.645 13:42:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:11:25.645 13:42:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1565390 00:11:25.645 13:42:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 1565390 /var/tmp/spdk.sock 00:11:25.645 13:42:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:11:25.645 13:42:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 1565390 ']' 00:11:25.645 13:42:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:25.645 13:42:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:25.645 13:42:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:25.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:25.645 13:42:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:25.645 13:42:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:25.645 [2024-12-05 13:42:25.480552] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 
00:11:25.645 [2024-12-05 13:42:25.480590] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1565390 ] 00:11:25.903 [2024-12-05 13:42:25.552822] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:25.903 [2024-12-05 13:42:25.576780] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:25.903 [2024-12-05 13:42:25.576888] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:25.903 [2024-12-05 13:42:25.576890] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:26.161 13:42:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:26.161 13:42:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:11:26.161 13:42:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1565401 00:11:26.161 13:42:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1565401 /var/tmp/spdk2.sock 00:11:26.161 13:42:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:11:26.161 13:42:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:11:26.161 13:42:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 1565401 /var/tmp/spdk2.sock 00:11:26.161 13:42:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:11:26.161 13:42:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:26.161 13:42:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:11:26.161 13:42:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:26.161 13:42:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 1565401 /var/tmp/spdk2.sock 00:11:26.161 13:42:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 1565401 ']' 00:11:26.161 13:42:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:26.161 13:42:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:26.161 13:42:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:26.161 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:26.161 13:42:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:26.161 13:42:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:26.161 [2024-12-05 13:42:25.815906] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 
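The masks here guarantee exactly one contested core: -m 0x7 is binary 111 (cores 0-2, hence the three reactors above) and -m 0x1c is binary 11100 (cores 2-4), so the two targets overlap only on core 2, precisely the core named in the claim error that follows. Quick check:

    # 0x7  = 0b00111 -> cores 0,1,2   (first target)
    # 0x1c = 0b11100 -> cores 2,3,4   (second target)
    printf 'contested mask: 0x%x\n' $((0x7 & 0x1c))   # prints 0x4 -> core 2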
00:11:26.161 [2024-12-05 13:42:25.815943] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1565401 ] 00:11:26.161 [2024-12-05 13:42:25.900331] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1565390 has claimed it. 00:11:26.161 [2024-12-05 13:42:25.900365] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:11:26.727 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (1565401) - No such process 00:11:26.727 ERROR: process (pid: 1565401) is no longer running 00:11:26.727 13:42:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:26.727 13:42:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:11:26.727 13:42:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:11:26.727 13:42:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:26.727 13:42:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:26.727 13:42:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:26.727 13:42:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:11:26.727 13:42:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:11:26.727 13:42:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:11:26.727 13:42:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:11:26.727 13:42:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 1565390 00:11:26.727 13:42:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 1565390 ']' 00:11:26.727 13:42:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 1565390 00:11:26.727 13:42:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:11:26.727 13:42:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:26.727 13:42:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1565390 00:11:26.727 13:42:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:26.727 13:42:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:26.727 13:42:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1565390' 00:11:26.727 killing process with pid 1565390 00:11:26.727 13:42:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 1565390 00:11:26.727 13:42:26 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 1565390 00:11:26.985 00:11:26.985 real 0m1.352s 00:11:26.985 user 0m3.734s 00:11:26.985 sys 0m0.395s 00:11:26.985 13:42:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:26.985 13:42:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:26.985 ************************************ 00:11:26.985 END TEST locking_overlapped_coremask 00:11:26.985 ************************************ 00:11:26.985 13:42:26 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:11:26.985 13:42:26 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:26.985 13:42:26 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:26.985 13:42:26 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:27.243 ************************************ 00:11:27.243 START TEST locking_overlapped_coremask_via_rpc 00:11:27.243 ************************************ 00:11:27.243 13:42:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:11:27.243 13:42:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1565686 00:11:27.243 13:42:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 1565686 /var/tmp/spdk.sock 00:11:27.243 13:42:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:11:27.243 13:42:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1565686 ']' 00:11:27.243 13:42:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:27.243 13:42:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:27.243 13:42:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:27.243 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:27.243 13:42:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:27.243 13:42:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:27.243 [2024-12-05 13:42:26.904156] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 00:11:27.243 [2024-12-05 13:42:26.904194] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1565686 ] 00:11:27.243 [2024-12-05 13:42:26.978104] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
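Before killing the surviving target, locking_overlapped_coremask ran check_remaining_locks (cpu_locks.sh@36-38 above) to assert that the holder still owns exactly the locks for its own 0x7 mask and that nothing leaked from the failed 0x1c launch. The trace shows the comparison is a plain glob-versus-brace-expansion equality:

    check_remaining_locks() {
      locks=(/var/tmp/spdk_cpu_lock_*)                    # what actually exists
      locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})  # cores 0-2 for -m 0x7
      [[ ${locks[*]} == "${locks_expected[*]}" ]]         # must match exactly
    }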
00:11:27.243 [2024-12-05 13:42:26.978129] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:27.243 [2024-12-05 13:42:27.000257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:27.243 [2024-12-05 13:42:27.000363] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:27.243 [2024-12-05 13:42:27.000364] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:27.502 13:42:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:27.502 13:42:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:11:27.502 13:42:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1565694 00:11:27.502 13:42:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 1565694 /var/tmp/spdk2.sock 00:11:27.502 13:42:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1565694 ']' 00:11:27.502 13:42:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:11:27.502 13:42:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:27.502 13:42:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:27.502 13:42:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:27.502 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:27.502 13:42:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:27.502 13:42:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:27.502 [2024-12-05 13:42:27.248498] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 00:11:27.502 [2024-12-05 13:42:27.248538] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1565694 ] 00:11:27.502 [2024-12-05 13:42:27.333394] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:11:27.502 [2024-12-05 13:42:27.333423] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:27.760 [2024-12-05 13:42:27.380978] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:27.760 [2024-12-05 13:42:27.381088] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:27.760 [2024-12-05 13:42:27.381091] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:11:28.327 13:42:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:28.327 13:42:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:11:28.327 13:42:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:11:28.327 13:42:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.327 13:42:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:28.327 13:42:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.327 13:42:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:11:28.327 13:42:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:11:28.327 13:42:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:11:28.327 13:42:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:11:28.327 13:42:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:28.327 13:42:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:11:28.327 13:42:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:28.327 13:42:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:11:28.327 13:42:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.327 13:42:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:28.327 [2024-12-05 13:42:28.073447] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1565686 has claimed it. 
00:11:28.327 request: 00:11:28.327 { 00:11:28.327 "method": "framework_enable_cpumask_locks", 00:11:28.327 "req_id": 1 00:11:28.327 } 00:11:28.327 Got JSON-RPC error response 00:11:28.327 response: 00:11:28.327 { 00:11:28.327 "code": -32603, 00:11:28.327 "message": "Failed to claim CPU core: 2" 00:11:28.327 } 00:11:28.327 13:42:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:11:28.327 13:42:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:11:28.327 13:42:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:28.327 13:42:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:28.327 13:42:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:28.327 13:42:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 1565686 /var/tmp/spdk.sock 00:11:28.327 13:42:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1565686 ']' 00:11:28.327 13:42:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:28.327 13:42:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:28.327 13:42:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:28.327 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:28.327 13:42:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:28.327 13:42:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:28.586 13:42:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:28.586 13:42:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:11:28.586 13:42:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 1565694 /var/tmp/spdk2.sock 00:11:28.586 13:42:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1565694 ']' 00:11:28.586 13:42:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:28.586 13:42:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:28.586 13:42:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:28.586 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
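This request/response pair is the RPC-level version of the lock conflict: framework_enable_cpumask_locks on the second target tries to claim cores 2-4, finds core 2 already held by pid 1565686, and surfaces the app.c claim failure as JSON-RPC error -32603. Reproduced by hand it would look roughly like (scripts/rpc.py invocation assumed):

    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
    # -> error -32603, "Failed to claim CPU core: 2", because pid 1565686
    #    still holds /var/tmp/spdk_cpu_lock_002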
00:11:28.586 13:42:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:28.586 13:42:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:28.845 13:42:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:28.845 13:42:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:11:28.845 13:42:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:11:28.845 13:42:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:11:28.845 13:42:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:11:28.845 13:42:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:11:28.845 00:11:28.845 real 0m1.620s 00:11:28.845 user 0m0.759s 00:11:28.845 sys 0m0.123s 00:11:28.845 13:42:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:28.845 13:42:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:28.845 ************************************ 00:11:28.845 END TEST locking_overlapped_coremask_via_rpc 00:11:28.845 ************************************ 00:11:28.845 13:42:28 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:11:28.845 13:42:28 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1565686 ]] 00:11:28.845 13:42:28 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1565686 00:11:28.845 13:42:28 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1565686 ']' 00:11:28.845 13:42:28 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1565686 00:11:28.845 13:42:28 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:11:28.845 13:42:28 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:28.845 13:42:28 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1565686 00:11:28.845 13:42:28 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:28.845 13:42:28 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:28.845 13:42:28 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1565686' 00:11:28.845 killing process with pid 1565686 00:11:28.845 13:42:28 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 1565686 00:11:28.845 13:42:28 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 1565686 00:11:29.104 13:42:28 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1565694 ]] 00:11:29.104 13:42:28 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1565694 00:11:29.104 13:42:28 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1565694 ']' 00:11:29.104 13:42:28 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1565694 00:11:29.104 13:42:28 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:11:29.104 13:42:28 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' 
Linux = Linux ']' 00:11:29.104 13:42:28 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1565694 00:11:29.104 13:42:28 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:11:29.104 13:42:28 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:11:29.104 13:42:28 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1565694' 00:11:29.104 killing process with pid 1565694 00:11:29.104 13:42:28 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 1565694 00:11:29.104 13:42:28 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 1565694 00:11:29.363 13:42:29 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:11:29.363 13:42:29 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:11:29.363 13:42:29 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1565686 ]] 00:11:29.363 13:42:29 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1565686 00:11:29.363 13:42:29 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1565686 ']' 00:11:29.363 13:42:29 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1565686 00:11:29.363 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1565686) - No such process 00:11:29.363 13:42:29 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 1565686 is not found' 00:11:29.363 Process with pid 1565686 is not found 00:11:29.363 13:42:29 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1565694 ]] 00:11:29.363 13:42:29 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1565694 00:11:29.363 13:42:29 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1565694 ']' 00:11:29.363 13:42:29 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1565694 00:11:29.363 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1565694) - No such process 00:11:29.363 13:42:29 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 1565694 is not found' 00:11:29.363 Process with pid 1565694 is not found 00:11:29.363 13:42:29 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:11:29.363 00:11:29.363 real 0m13.411s 00:11:29.363 user 0m23.201s 00:11:29.363 sys 0m4.850s 00:11:29.363 13:42:29 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:29.364 13:42:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:29.364 ************************************ 00:11:29.364 END TEST cpu_locks 00:11:29.364 ************************************ 00:11:29.622 00:11:29.622 real 0m37.590s 00:11:29.622 user 1m11.308s 00:11:29.622 sys 0m8.334s 00:11:29.622 13:42:29 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:29.622 13:42:29 event -- common/autotest_common.sh@10 -- # set +x 00:11:29.622 ************************************ 00:11:29.622 END TEST event 00:11:29.622 ************************************ 00:11:29.622 13:42:29 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/thread.sh 00:11:29.622 13:42:29 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:29.622 13:42:29 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:29.622 13:42:29 -- common/autotest_common.sh@10 -- # set +x 00:11:29.622 ************************************ 00:11:29.622 START TEST thread 00:11:29.622 ************************************ 00:11:29.622 13:42:29 thread -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/thread.sh 00:11:29.622 * Looking for test storage... 00:11:29.622 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread 00:11:29.622 13:42:29 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:29.622 13:42:29 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:11:29.622 13:42:29 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:29.881 13:42:29 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:29.881 13:42:29 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:29.881 13:42:29 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:29.881 13:42:29 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:29.881 13:42:29 thread -- scripts/common.sh@336 -- # IFS=.-: 00:11:29.881 13:42:29 thread -- scripts/common.sh@336 -- # read -ra ver1 00:11:29.881 13:42:29 thread -- scripts/common.sh@337 -- # IFS=.-: 00:11:29.881 13:42:29 thread -- scripts/common.sh@337 -- # read -ra ver2 00:11:29.881 13:42:29 thread -- scripts/common.sh@338 -- # local 'op=<' 00:11:29.881 13:42:29 thread -- scripts/common.sh@340 -- # ver1_l=2 00:11:29.881 13:42:29 thread -- scripts/common.sh@341 -- # ver2_l=1 00:11:29.881 13:42:29 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:29.881 13:42:29 thread -- scripts/common.sh@344 -- # case "$op" in 00:11:29.881 13:42:29 thread -- scripts/common.sh@345 -- # : 1 00:11:29.881 13:42:29 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:29.881 13:42:29 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:29.881 13:42:29 thread -- scripts/common.sh@365 -- # decimal 1 00:11:29.881 13:42:29 thread -- scripts/common.sh@353 -- # local d=1 00:11:29.881 13:42:29 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:29.881 13:42:29 thread -- scripts/common.sh@355 -- # echo 1 00:11:29.881 13:42:29 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:11:29.881 13:42:29 thread -- scripts/common.sh@366 -- # decimal 2 00:11:29.881 13:42:29 thread -- scripts/common.sh@353 -- # local d=2 00:11:29.881 13:42:29 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:29.881 13:42:29 thread -- scripts/common.sh@355 -- # echo 2 00:11:29.881 13:42:29 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:11:29.881 13:42:29 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:29.881 13:42:29 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:29.881 13:42:29 thread -- scripts/common.sh@368 -- # return 0 00:11:29.881 13:42:29 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:29.881 13:42:29 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:29.881 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:29.881 --rc genhtml_branch_coverage=1 00:11:29.881 --rc genhtml_function_coverage=1 00:11:29.881 --rc genhtml_legend=1 00:11:29.881 --rc geninfo_all_blocks=1 00:11:29.881 --rc geninfo_unexecuted_blocks=1 00:11:29.881 00:11:29.881 ' 00:11:29.881 13:42:29 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:29.881 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:29.882 --rc genhtml_branch_coverage=1 00:11:29.882 --rc genhtml_function_coverage=1 00:11:29.882 --rc genhtml_legend=1 00:11:29.882 --rc geninfo_all_blocks=1 00:11:29.882 --rc geninfo_unexecuted_blocks=1 00:11:29.882 00:11:29.882 ' 00:11:29.882 13:42:29 thread -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:29.882 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:29.882 --rc genhtml_branch_coverage=1 00:11:29.882 --rc genhtml_function_coverage=1 00:11:29.882 --rc genhtml_legend=1 00:11:29.882 --rc geninfo_all_blocks=1 00:11:29.882 --rc geninfo_unexecuted_blocks=1 00:11:29.882 00:11:29.882 ' 00:11:29.882 13:42:29 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:29.882 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:29.882 --rc genhtml_branch_coverage=1 00:11:29.882 --rc genhtml_function_coverage=1 00:11:29.882 --rc genhtml_legend=1 00:11:29.882 --rc geninfo_all_blocks=1 00:11:29.882 --rc geninfo_unexecuted_blocks=1 00:11:29.882 00:11:29.882 ' 00:11:29.882 13:42:29 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:11:29.882 13:42:29 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:11:29.882 13:42:29 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:29.882 13:42:29 thread -- common/autotest_common.sh@10 -- # set +x 00:11:29.882 ************************************ 00:11:29.882 START TEST thread_poller_perf 00:11:29.882 ************************************ 00:11:29.882 13:42:29 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:11:29.882 [2024-12-05 13:42:29.552099] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 00:11:29.882 [2024-12-05 13:42:29.552168] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1566190 ] 00:11:29.882 [2024-12-05 13:42:29.631476] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:29.882 [2024-12-05 13:42:29.652328] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:29.882 Running 1000 pollers for 1 seconds with 1 microseconds period. 
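The long xtrace block above (it reappears before every test group) is scripts/common.sh probing the installed lcov: "lt 1.15 2" splits both version strings on the characters .-: and compares them field by field, numerically, so the right LCOV_OPTS can be exported. A condensed sketch of the comparison the trace walks through:

# lt A B: succeed when dotted version A sorts before B; fields are compared
# numerically and missing fields count as 0 - mirrors cmp_versions above.
lt() {
  local -a v1 v2; local i
  IFS=.-: read -ra v1 <<< "$1"
  IFS=.-: read -ra v2 <<< "$2"
  for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
    (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
    (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
  done
  return 1   # equal versions are not less-than
}
lt 1.15 2 && echo 'lcov predates 2.0: enable the branch/function coverage flags'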
00:11:31.259 [2024-12-05T12:42:31.112Z] ====================================== 00:11:31.259 [2024-12-05T12:42:31.112Z] busy:2706136920 (cyc) 00:11:31.259 [2024-12-05T12:42:31.112Z] total_run_count: 437000 00:11:31.259 [2024-12-05T12:42:31.112Z] tsc_hz: 2700000000 (cyc) 00:11:31.259 [2024-12-05T12:42:31.112Z] ====================================== 00:11:31.259 [2024-12-05T12:42:31.112Z] poller_cost: 6192 (cyc), 2293 (nsec) 00:11:31.259 00:11:31.259 real 0m1.161s 00:11:31.259 user 0m1.080s 00:11:31.259 sys 0m0.077s 00:11:31.259 13:42:30 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:31.259 13:42:30 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:11:31.259 ************************************ 00:11:31.259 END TEST thread_poller_perf 00:11:31.259 ************************************ 00:11:31.259 13:42:30 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:11:31.259 13:42:30 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:11:31.259 13:42:30 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:31.259 13:42:30 thread -- common/autotest_common.sh@10 -- # set +x 00:11:31.259 ************************************ 00:11:31.259 START TEST thread_poller_perf 00:11:31.259 ************************************ 00:11:31.259 13:42:30 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:11:31.259 [2024-12-05 13:42:30.783971] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 00:11:31.259 [2024-12-05 13:42:30.784038] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1566365 ] 00:11:31.259 [2024-12-05 13:42:30.865493] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:31.259 [2024-12-05 13:42:30.888489] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:31.259 Running 1000 pollers for 1 seconds with 0 microseconds period. 
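For the record, the poller_cost figures in the table above are simple ratios: busy cycles divided by total_run_count, then rescaled by the 2.7 GHz TSC for nanoseconds. Checking the 1 microsecond run:

# poller_cost(cyc)  = busy / total_run_count = 2706136920 / 437000 ~= 6192
# poller_cost(nsec) = cyc / (tsc_hz / 1e9)   = 6192 / 2.7          ~= 2293
awk 'BEGIN { c = 2706136920 / 437000; printf "%d cyc, %d nsec\n", c, c / 2.7 }'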
00:11:32.197 [2024-12-05T12:42:32.050Z] ====================================== 00:11:32.197 [2024-12-05T12:42:32.050Z] busy:2701615288 (cyc) 00:11:32.197 [2024-12-05T12:42:32.050Z] total_run_count: 5993000 00:11:32.197 [2024-12-05T12:42:32.050Z] tsc_hz: 2700000000 (cyc) 00:11:32.197 [2024-12-05T12:42:32.050Z] ====================================== 00:11:32.197 [2024-12-05T12:42:32.050Z] poller_cost: 450 (cyc), 166 (nsec) 00:11:32.197 00:11:32.197 real 0m1.159s 00:11:32.197 user 0m1.077s 00:11:32.197 sys 0m0.077s 00:11:32.197 13:42:31 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:32.197 13:42:31 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:11:32.197 ************************************ 00:11:32.197 END TEST thread_poller_perf 00:11:32.197 ************************************ 00:11:32.197 13:42:31 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:11:32.197 00:11:32.197 real 0m2.635s 00:11:32.197 user 0m2.313s 00:11:32.197 sys 0m0.336s 00:11:32.197 13:42:31 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:32.197 13:42:31 thread -- common/autotest_common.sh@10 -- # set +x 00:11:32.197 ************************************ 00:11:32.197 END TEST thread 00:11:32.197 ************************************ 00:11:32.197 13:42:31 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:11:32.197 13:42:31 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/cmdline.sh 00:11:32.197 13:42:31 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:32.197 13:42:31 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:32.197 13:42:31 -- common/autotest_common.sh@10 -- # set +x 00:11:32.197 ************************************ 00:11:32.197 START TEST app_cmdline 00:11:32.197 ************************************ 00:11:32.197 13:42:32 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/cmdline.sh 00:11:32.457 * Looking for test storage... 
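Same arithmetic for the zero-period run above: 2701615288 / 5993000 gives roughly 450 cycles, i.e. about 166 ns per call — over an order of magnitude cheaper than the 1 microsecond case, presumably because a 0-period poller runs on every reactor iteration without the timed-poller bookkeeping.

awk 'BEGIN { c = 2701615288 / 5993000; printf "%d cyc, %d nsec\n", c, c / 2.7 }'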
00:11:32.457 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:11:32.457 13:42:32 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:32.457 13:42:32 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:11:32.457 13:42:32 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:32.457 13:42:32 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:32.457 13:42:32 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:32.457 13:42:32 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:32.457 13:42:32 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:32.457 13:42:32 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:11:32.457 13:42:32 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:11:32.457 13:42:32 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:11:32.457 13:42:32 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:11:32.457 13:42:32 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:11:32.457 13:42:32 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:11:32.457 13:42:32 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:11:32.457 13:42:32 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:32.457 13:42:32 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:11:32.457 13:42:32 app_cmdline -- scripts/common.sh@345 -- # : 1 00:11:32.457 13:42:32 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:32.457 13:42:32 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:32.457 13:42:32 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:11:32.457 13:42:32 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:11:32.457 13:42:32 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:32.457 13:42:32 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:11:32.457 13:42:32 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:11:32.457 13:42:32 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:11:32.457 13:42:32 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:11:32.457 13:42:32 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:32.457 13:42:32 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:11:32.457 13:42:32 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:11:32.457 13:42:32 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:32.457 13:42:32 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:32.457 13:42:32 app_cmdline -- scripts/common.sh@368 -- # return 0 00:11:32.457 13:42:32 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:32.457 13:42:32 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:32.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:32.457 --rc genhtml_branch_coverage=1 00:11:32.457 --rc genhtml_function_coverage=1 00:11:32.457 --rc genhtml_legend=1 00:11:32.457 --rc geninfo_all_blocks=1 00:11:32.457 --rc geninfo_unexecuted_blocks=1 00:11:32.457 00:11:32.457 ' 00:11:32.457 13:42:32 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:32.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:32.457 --rc genhtml_branch_coverage=1 00:11:32.457 --rc genhtml_function_coverage=1 00:11:32.457 --rc genhtml_legend=1 00:11:32.457 --rc geninfo_all_blocks=1 00:11:32.457 --rc geninfo_unexecuted_blocks=1 
00:11:32.457 00:11:32.457 ' 00:11:32.457 13:42:32 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:32.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:32.457 --rc genhtml_branch_coverage=1 00:11:32.457 --rc genhtml_function_coverage=1 00:11:32.457 --rc genhtml_legend=1 00:11:32.457 --rc geninfo_all_blocks=1 00:11:32.457 --rc geninfo_unexecuted_blocks=1 00:11:32.457 00:11:32.457 ' 00:11:32.458 13:42:32 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:32.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:32.458 --rc genhtml_branch_coverage=1 00:11:32.458 --rc genhtml_function_coverage=1 00:11:32.458 --rc genhtml_legend=1 00:11:32.458 --rc geninfo_all_blocks=1 00:11:32.458 --rc geninfo_unexecuted_blocks=1 00:11:32.458 00:11:32.458 ' 00:11:32.458 13:42:32 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:11:32.458 13:42:32 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=1566692 00:11:32.458 13:42:32 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 1566692 00:11:32.458 13:42:32 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:11:32.458 13:42:32 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 1566692 ']' 00:11:32.458 13:42:32 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:32.458 13:42:32 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:32.458 13:42:32 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:32.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:32.458 13:42:32 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:32.458 13:42:32 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:11:32.458 [2024-12-05 13:42:32.243418] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 
00:11:32.458 [2024-12-05 13:42:32.243466] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1566692 ] 00:11:32.715 [2024-12-05 13:42:32.314604] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:32.716 [2024-12-05 13:42:32.335951] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:32.716 13:42:32 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:32.716 13:42:32 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:11:32.716 13:42:32 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:11:32.974 { 00:11:32.974 "version": "SPDK v25.01-pre git sha1 8d3947977", 00:11:32.974 "fields": { 00:11:32.974 "major": 25, 00:11:32.974 "minor": 1, 00:11:32.974 "patch": 0, 00:11:32.974 "suffix": "-pre", 00:11:32.974 "commit": "8d3947977" 00:11:32.974 } 00:11:32.974 } 00:11:32.974 13:42:32 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:11:32.974 13:42:32 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:11:32.974 13:42:32 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:11:32.974 13:42:32 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:11:32.974 13:42:32 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:11:32.974 13:42:32 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:11:32.974 13:42:32 app_cmdline -- app/cmdline.sh@26 -- # sort 00:11:32.974 13:42:32 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.974 13:42:32 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:11:32.974 13:42:32 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.974 13:42:32 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:11:32.974 13:42:32 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:11:32.974 13:42:32 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:11:32.974 13:42:32 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:11:32.974 13:42:32 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:11:32.974 13:42:32 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:11:32.974 13:42:32 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:32.974 13:42:32 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:11:32.974 13:42:32 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:32.974 13:42:32 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:11:32.974 13:42:32 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:32.974 13:42:32 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:11:32.974 13:42:32 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:11:32.974 13:42:32 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:11:33.234 request: 00:11:33.234 { 00:11:33.234 "method": "env_dpdk_get_mem_stats", 00:11:33.234 "req_id": 1 00:11:33.234 } 00:11:33.234 Got JSON-RPC error response 00:11:33.234 response: 00:11:33.234 { 00:11:33.234 "code": -32601, 00:11:33.235 "message": "Method not found" 00:11:33.235 } 00:11:33.235 13:42:32 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:11:33.235 13:42:32 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:33.235 13:42:32 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:33.235 13:42:32 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:33.235 13:42:32 app_cmdline -- app/cmdline.sh@1 -- # killprocess 1566692 00:11:33.235 13:42:32 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 1566692 ']' 00:11:33.235 13:42:32 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 1566692 00:11:33.235 13:42:32 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:11:33.235 13:42:32 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:33.235 13:42:32 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1566692 00:11:33.235 13:42:32 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:33.235 13:42:32 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:33.235 13:42:32 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1566692' 00:11:33.235 killing process with pid 1566692 00:11:33.235 13:42:32 app_cmdline -- common/autotest_common.sh@973 -- # kill 1566692 00:11:33.235 13:42:32 app_cmdline -- common/autotest_common.sh@978 -- # wait 1566692 00:11:33.494 00:11:33.494 real 0m1.244s 00:11:33.494 user 0m1.385s 00:11:33.494 sys 0m0.469s 00:11:33.494 13:42:33 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:33.494 13:42:33 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:11:33.494 ************************************ 00:11:33.494 END TEST app_cmdline 00:11:33.494 ************************************ 00:11:33.494 13:42:33 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/version.sh 00:11:33.494 13:42:33 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:33.494 13:42:33 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:33.494 13:42:33 -- common/autotest_common.sh@10 -- # set +x 00:11:33.494 ************************************ 00:11:33.494 START TEST version 00:11:33.494 ************************************ 00:11:33.494 13:42:33 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/version.sh 00:11:33.753 * Looking for test storage... 
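The -32601 "Method not found" above is the expected outcome: spdk_tgt was launched with --rpcs-allowed spdk_get_version,rpc_get_methods, so every other method, env_dpdk_get_mem_stats included, is rejected at the RPC dispatcher, and rpc_get_methods reports only the two permitted methods (the "(( 2 == 2 ))" check above). Exercised by hand (paths from this log):

#!/usr/bin/env bash
RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

"$RPC" spdk_get_version        # allowed: prints the version JSON seen above
"$RPC" rpc_get_methods         # allowed: lists exactly the permitted methods
"$RPC" env_dpdk_get_mem_stats \
  || echo 'rejected with -32601, as the allow-list intends'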
00:11:33.753 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:11:33.753 13:42:33 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:33.753 13:42:33 version -- common/autotest_common.sh@1711 -- # lcov --version 00:11:33.753 13:42:33 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:33.753 13:42:33 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:33.753 13:42:33 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:33.753 13:42:33 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:33.753 13:42:33 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:33.753 13:42:33 version -- scripts/common.sh@336 -- # IFS=.-: 00:11:33.753 13:42:33 version -- scripts/common.sh@336 -- # read -ra ver1 00:11:33.753 13:42:33 version -- scripts/common.sh@337 -- # IFS=.-: 00:11:33.753 13:42:33 version -- scripts/common.sh@337 -- # read -ra ver2 00:11:33.753 13:42:33 version -- scripts/common.sh@338 -- # local 'op=<' 00:11:33.753 13:42:33 version -- scripts/common.sh@340 -- # ver1_l=2 00:11:33.753 13:42:33 version -- scripts/common.sh@341 -- # ver2_l=1 00:11:33.753 13:42:33 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:33.753 13:42:33 version -- scripts/common.sh@344 -- # case "$op" in 00:11:33.753 13:42:33 version -- scripts/common.sh@345 -- # : 1 00:11:33.753 13:42:33 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:33.753 13:42:33 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:33.753 13:42:33 version -- scripts/common.sh@365 -- # decimal 1 00:11:33.753 13:42:33 version -- scripts/common.sh@353 -- # local d=1 00:11:33.753 13:42:33 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:33.753 13:42:33 version -- scripts/common.sh@355 -- # echo 1 00:11:33.753 13:42:33 version -- scripts/common.sh@365 -- # ver1[v]=1 00:11:33.753 13:42:33 version -- scripts/common.sh@366 -- # decimal 2 00:11:33.753 13:42:33 version -- scripts/common.sh@353 -- # local d=2 00:11:33.753 13:42:33 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:33.753 13:42:33 version -- scripts/common.sh@355 -- # echo 2 00:11:33.753 13:42:33 version -- scripts/common.sh@366 -- # ver2[v]=2 00:11:33.753 13:42:33 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:33.753 13:42:33 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:33.753 13:42:33 version -- scripts/common.sh@368 -- # return 0 00:11:33.753 13:42:33 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:33.753 13:42:33 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:33.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:33.753 --rc genhtml_branch_coverage=1 00:11:33.753 --rc genhtml_function_coverage=1 00:11:33.753 --rc genhtml_legend=1 00:11:33.753 --rc geninfo_all_blocks=1 00:11:33.753 --rc geninfo_unexecuted_blocks=1 00:11:33.753 00:11:33.753 ' 00:11:33.753 13:42:33 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:33.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:33.753 --rc genhtml_branch_coverage=1 00:11:33.753 --rc genhtml_function_coverage=1 00:11:33.753 --rc genhtml_legend=1 00:11:33.753 --rc geninfo_all_blocks=1 00:11:33.753 --rc geninfo_unexecuted_blocks=1 00:11:33.753 00:11:33.753 ' 00:11:33.753 13:42:33 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:33.753 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:33.753 --rc genhtml_branch_coverage=1 00:11:33.753 --rc genhtml_function_coverage=1 00:11:33.753 --rc genhtml_legend=1 00:11:33.753 --rc geninfo_all_blocks=1 00:11:33.753 --rc geninfo_unexecuted_blocks=1 00:11:33.753 00:11:33.753 ' 00:11:33.753 13:42:33 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:33.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:33.753 --rc genhtml_branch_coverage=1 00:11:33.753 --rc genhtml_function_coverage=1 00:11:33.753 --rc genhtml_legend=1 00:11:33.753 --rc geninfo_all_blocks=1 00:11:33.753 --rc geninfo_unexecuted_blocks=1 00:11:33.753 00:11:33.753 ' 00:11:33.753 13:42:33 version -- app/version.sh@17 -- # get_header_version major 00:11:33.753 13:42:33 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:11:33.753 13:42:33 version -- app/version.sh@14 -- # cut -f2 00:11:33.753 13:42:33 version -- app/version.sh@14 -- # tr -d '"' 00:11:33.753 13:42:33 version -- app/version.sh@17 -- # major=25 00:11:33.753 13:42:33 version -- app/version.sh@18 -- # get_header_version minor 00:11:33.753 13:42:33 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:11:33.753 13:42:33 version -- app/version.sh@14 -- # cut -f2 00:11:33.753 13:42:33 version -- app/version.sh@14 -- # tr -d '"' 00:11:33.753 13:42:33 version -- app/version.sh@18 -- # minor=1 00:11:33.753 13:42:33 version -- app/version.sh@19 -- # get_header_version patch 00:11:33.753 13:42:33 version -- app/version.sh@14 -- # cut -f2 00:11:33.753 13:42:33 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:11:33.753 13:42:33 version -- app/version.sh@14 -- # tr -d '"' 00:11:33.753 13:42:33 version -- app/version.sh@19 -- # patch=0 00:11:33.753 13:42:33 version -- app/version.sh@20 -- # get_header_version suffix 00:11:33.753 13:42:33 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:11:33.753 13:42:33 version -- app/version.sh@14 -- # cut -f2 00:11:33.753 13:42:33 version -- app/version.sh@14 -- # tr -d '"' 00:11:33.753 13:42:33 version -- app/version.sh@20 -- # suffix=-pre 00:11:33.753 13:42:33 version -- app/version.sh@22 -- # version=25.1 00:11:33.753 13:42:33 version -- app/version.sh@25 -- # (( patch != 0 )) 00:11:33.753 13:42:33 version -- app/version.sh@28 -- # version=25.1rc0 00:11:33.753 13:42:33 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:11:33.753 13:42:33 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:11:33.753 13:42:33 version -- app/version.sh@30 -- # py_version=25.1rc0 00:11:33.753 13:42:33 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:11:33.753 00:11:33.753 real 0m0.240s 00:11:33.753 user 0m0.145s 00:11:33.753 sys 0m0.137s 00:11:33.753 13:42:33 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:33.753 13:42:33 version -- 
common/autotest_common.sh@10 -- # set +x 00:11:33.753 ************************************ 00:11:33.753 END TEST version 00:11:33.753 ************************************ 00:11:34.011 13:42:33 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:11:34.011 13:42:33 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:11:34.011 13:42:33 -- spdk/autotest.sh@194 -- # uname -s 00:11:34.011 13:42:33 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:11:34.012 13:42:33 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:11:34.012 13:42:33 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:11:34.012 13:42:33 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:11:34.012 13:42:33 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:11:34.012 13:42:33 -- spdk/autotest.sh@260 -- # timing_exit lib 00:11:34.012 13:42:33 -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:34.012 13:42:33 -- common/autotest_common.sh@10 -- # set +x 00:11:34.012 13:42:33 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:11:34.012 13:42:33 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:11:34.012 13:42:33 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:11:34.012 13:42:33 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:11:34.012 13:42:33 -- spdk/autotest.sh@280 -- # '[' rdma = rdma ']' 00:11:34.012 13:42:33 -- spdk/autotest.sh@281 -- # run_test nvmf_rdma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=rdma 00:11:34.012 13:42:33 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:34.012 13:42:33 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:34.012 13:42:33 -- common/autotest_common.sh@10 -- # set +x 00:11:34.012 ************************************ 00:11:34.012 START TEST nvmf_rdma 00:11:34.012 ************************************ 00:11:34.012 13:42:33 nvmf_rdma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=rdma 00:11:34.012 * Looking for test storage... 00:11:34.012 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf 00:11:34.012 13:42:33 nvmf_rdma -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:34.012 13:42:33 nvmf_rdma -- common/autotest_common.sh@1711 -- # lcov --version 00:11:34.012 13:42:33 nvmf_rdma -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:34.012 13:42:33 nvmf_rdma -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:34.012 13:42:33 nvmf_rdma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:34.012 13:42:33 nvmf_rdma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:34.012 13:42:33 nvmf_rdma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:34.012 13:42:33 nvmf_rdma -- scripts/common.sh@336 -- # IFS=.-: 00:11:34.012 13:42:33 nvmf_rdma -- scripts/common.sh@336 -- # read -ra ver1 00:11:34.012 13:42:33 nvmf_rdma -- scripts/common.sh@337 -- # IFS=.-: 00:11:34.012 13:42:33 nvmf_rdma -- scripts/common.sh@337 -- # read -ra ver2 00:11:34.012 13:42:33 nvmf_rdma -- scripts/common.sh@338 -- # local 'op=<' 00:11:34.012 13:42:33 nvmf_rdma -- scripts/common.sh@340 -- # ver1_l=2 00:11:34.012 13:42:33 nvmf_rdma -- scripts/common.sh@341 -- # ver2_l=1 00:11:34.012 13:42:33 nvmf_rdma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:34.012 13:42:33 nvmf_rdma -- scripts/common.sh@344 -- # case "$op" in 00:11:34.012 13:42:33 nvmf_rdma -- scripts/common.sh@345 -- # : 1 00:11:34.012 13:42:33 nvmf_rdma -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:34.012 13:42:33 nvmf_rdma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:34.012 13:42:33 nvmf_rdma -- scripts/common.sh@365 -- # decimal 1 00:11:34.012 13:42:33 nvmf_rdma -- scripts/common.sh@353 -- # local d=1 00:11:34.012 13:42:33 nvmf_rdma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:34.012 13:42:33 nvmf_rdma -- scripts/common.sh@355 -- # echo 1 00:11:34.012 13:42:33 nvmf_rdma -- scripts/common.sh@365 -- # ver1[v]=1 00:11:34.012 13:42:33 nvmf_rdma -- scripts/common.sh@366 -- # decimal 2 00:11:34.012 13:42:33 nvmf_rdma -- scripts/common.sh@353 -- # local d=2 00:11:34.012 13:42:33 nvmf_rdma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:34.012 13:42:33 nvmf_rdma -- scripts/common.sh@355 -- # echo 2 00:11:34.012 13:42:33 nvmf_rdma -- scripts/common.sh@366 -- # ver2[v]=2 00:11:34.012 13:42:33 nvmf_rdma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:34.012 13:42:33 nvmf_rdma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:34.012 13:42:33 nvmf_rdma -- scripts/common.sh@368 -- # return 0 00:11:34.271 13:42:33 nvmf_rdma -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:34.271 13:42:33 nvmf_rdma -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:34.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:34.271 --rc genhtml_branch_coverage=1 00:11:34.271 --rc genhtml_function_coverage=1 00:11:34.271 --rc genhtml_legend=1 00:11:34.271 --rc geninfo_all_blocks=1 00:11:34.271 --rc geninfo_unexecuted_blocks=1 00:11:34.271 00:11:34.271 ' 00:11:34.272 13:42:33 nvmf_rdma -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:34.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:34.272 --rc genhtml_branch_coverage=1 00:11:34.272 --rc genhtml_function_coverage=1 00:11:34.272 --rc genhtml_legend=1 00:11:34.272 --rc geninfo_all_blocks=1 00:11:34.272 --rc geninfo_unexecuted_blocks=1 00:11:34.272 00:11:34.272 ' 00:11:34.272 13:42:33 nvmf_rdma -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:34.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:34.272 --rc genhtml_branch_coverage=1 00:11:34.272 --rc genhtml_function_coverage=1 00:11:34.272 --rc genhtml_legend=1 00:11:34.272 --rc geninfo_all_blocks=1 00:11:34.272 --rc geninfo_unexecuted_blocks=1 00:11:34.272 00:11:34.272 ' 00:11:34.272 13:42:33 nvmf_rdma -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:34.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:34.272 --rc genhtml_branch_coverage=1 00:11:34.272 --rc genhtml_function_coverage=1 00:11:34.272 --rc genhtml_legend=1 00:11:34.272 --rc geninfo_all_blocks=1 00:11:34.272 --rc geninfo_unexecuted_blocks=1 00:11:34.272 00:11:34.272 ' 00:11:34.272 13:42:33 nvmf_rdma -- nvmf/nvmf.sh@10 -- # uname -s 00:11:34.272 13:42:33 nvmf_rdma -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:11:34.272 13:42:33 nvmf_rdma -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=rdma 00:11:34.272 13:42:33 nvmf_rdma -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:34.272 13:42:33 nvmf_rdma -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:34.272 13:42:33 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:11:34.272 ************************************ 00:11:34.272 START TEST nvmf_target_core 00:11:34.272 ************************************ 00:11:34.272 13:42:33 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=rdma 00:11:34.272 * Looking for test storage... 00:11:34.272 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf 00:11:34.272 13:42:33 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:34.272 13:42:33 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1711 -- # lcov --version 00:11:34.272 13:42:33 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:34.272 13:42:34 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:34.272 13:42:34 nvmf_rdma.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:34.272 13:42:34 nvmf_rdma.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:34.272 13:42:34 nvmf_rdma.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:34.272 13:42:34 nvmf_rdma.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:11:34.272 13:42:34 nvmf_rdma.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:11:34.272 13:42:34 nvmf_rdma.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:11:34.272 13:42:34 nvmf_rdma.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:11:34.272 13:42:34 nvmf_rdma.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:11:34.272 13:42:34 nvmf_rdma.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:11:34.272 13:42:34 nvmf_rdma.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:11:34.272 13:42:34 nvmf_rdma.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:34.272 13:42:34 nvmf_rdma.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:11:34.272 13:42:34 nvmf_rdma.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:11:34.272 13:42:34 nvmf_rdma.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:34.272 13:42:34 nvmf_rdma.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:34.272 13:42:34 nvmf_rdma.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:11:34.272 13:42:34 nvmf_rdma.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:11:34.272 13:42:34 nvmf_rdma.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:34.272 13:42:34 nvmf_rdma.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:11:34.272 13:42:34 nvmf_rdma.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:11:34.272 13:42:34 nvmf_rdma.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:11:34.272 13:42:34 nvmf_rdma.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:11:34.272 13:42:34 nvmf_rdma.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:34.272 13:42:34 nvmf_rdma.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:11:34.272 13:42:34 nvmf_rdma.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:11:34.272 13:42:34 nvmf_rdma.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:34.272 13:42:34 nvmf_rdma.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:34.272 13:42:34 nvmf_rdma.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:11:34.272 13:42:34 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:34.272 13:42:34 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:34.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:34.272 --rc genhtml_branch_coverage=1 00:11:34.272 --rc genhtml_function_coverage=1 00:11:34.272 --rc genhtml_legend=1 00:11:34.272 --rc geninfo_all_blocks=1 00:11:34.272 --rc geninfo_unexecuted_blocks=1 00:11:34.272 00:11:34.272 ' 00:11:34.272 13:42:34 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:34.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:34.272 --rc genhtml_branch_coverage=1 00:11:34.272 --rc genhtml_function_coverage=1 00:11:34.272 --rc genhtml_legend=1 00:11:34.272 --rc geninfo_all_blocks=1 00:11:34.272 --rc geninfo_unexecuted_blocks=1 00:11:34.272 00:11:34.272 ' 00:11:34.272 13:42:34 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:34.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:34.272 --rc genhtml_branch_coverage=1 00:11:34.272 --rc genhtml_function_coverage=1 00:11:34.272 --rc genhtml_legend=1 00:11:34.272 --rc geninfo_all_blocks=1 00:11:34.272 --rc geninfo_unexecuted_blocks=1 00:11:34.272 00:11:34.272 ' 00:11:34.272 13:42:34 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:34.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:34.272 --rc genhtml_branch_coverage=1 00:11:34.272 --rc genhtml_function_coverage=1 00:11:34.272 --rc genhtml_legend=1 00:11:34.272 --rc geninfo_all_blocks=1 00:11:34.272 --rc geninfo_unexecuted_blocks=1 00:11:34.272 00:11:34.272 ' 00:11:34.272 13:42:34 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:11:34.272 13:42:34 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:11:34.272 13:42:34 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:34.272 13:42:34 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:11:34.272 13:42:34 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:34.272 13:42:34 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:34.272 13:42:34 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:34.272 13:42:34 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:34.272 13:42:34 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:34.272 13:42:34 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:34.272 13:42:34 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:34.272 13:42:34 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:34.272 13:42:34 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:34.272 13:42:34 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:34.272 13:42:34 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:11:34.272 13:42:34 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:11:34.272 13:42:34 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:34.272 13:42:34 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:34.272 13:42:34 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:34.272 13:42:34 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:34.272 13:42:34 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:34.272 13:42:34 nvmf_rdma.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:11:34.272 13:42:34 nvmf_rdma.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:34.272 13:42:34 nvmf_rdma.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:34.272 13:42:34 nvmf_rdma.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:34.272 13:42:34 nvmf_rdma.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.272 13:42:34 nvmf_rdma.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.272 13:42:34 nvmf_rdma.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.272 13:42:34 nvmf_rdma.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:11:34.273 13:42:34 nvmf_rdma.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.273 13:42:34 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:11:34.273 13:42:34 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:34.273 13:42:34 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:34.273 13:42:34 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:34.273 13:42:34 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:34.273 13:42:34 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:34.273 13:42:34 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:34.273 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:34.273 13:42:34 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:34.273 13:42:34 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:34.273 13:42:34 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:34.273 13:42:34 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:11:34.273 13:42:34 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:11:34.273 13:42:34 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:11:34.273 13:42:34 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=rdma 00:11:34.273 13:42:34 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:34.273 13:42:34 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:34.273 13:42:34 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:34.532 
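One artifact worth decoding before the abort-test output: the "[: : integer expression expected" complaint from nvmf/common.sh line 33 is bash refusing to evaluate '[' '' -eq 1 ']' because the tested variable is empty, and the script simply falls through to the next branch. The conventional guard is a default expansion; VAR below is a stand-in, not the actual variable tested at that line:

# '[ "$VAR" -eq 1 ]' errors when VAR is unset or empty:
unset VAR
[ "$VAR" -eq 1 ] 2>/dev/null || echo 'empty value: numeric test errors out'

# Defaulting the expansion keeps the test well-defined either way:
if [ "${VAR:-0}" -eq 1 ]; then
  echo 'feature enabled'
fi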
************************************ 00:11:34.532 START TEST nvmf_abort 00:11:34.532 ************************************ 00:11:34.532 13:42:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=rdma 00:11:34.532 * Looking for test storage... 00:11:34.532 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:34.532 13:42:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:34.532 13:42:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 00:11:34.532 13:42:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:34.532 13:42:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:34.532 13:42:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:34.532 13:42:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:34.532 13:42:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:34.532 13:42:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:11:34.532 13:42:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:11:34.532 13:42:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:11:34.532 13:42:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:11:34.532 13:42:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:11:34.532 13:42:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:11:34.532 13:42:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:11:34.532 13:42:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:34.532 13:42:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:11:34.532 13:42:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:11:34.532 13:42:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:34.532 13:42:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:34.532 13:42:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:11:34.532 13:42:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:11:34.532 13:42:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:34.532 13:42:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:11:34.532 13:42:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:11:34.532 13:42:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:11:34.532 13:42:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:11:34.532 13:42:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:34.532 13:42:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:11:34.532 13:42:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:11:34.532 13:42:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:34.532 13:42:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:34.532 13:42:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:11:34.532 13:42:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:34.532 13:42:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:34.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:34.533 --rc genhtml_branch_coverage=1 00:11:34.533 --rc genhtml_function_coverage=1 00:11:34.533 --rc genhtml_legend=1 00:11:34.533 --rc geninfo_all_blocks=1 00:11:34.533 --rc geninfo_unexecuted_blocks=1 00:11:34.533 00:11:34.533 ' 00:11:34.533 13:42:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:34.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:34.533 --rc genhtml_branch_coverage=1 00:11:34.533 --rc genhtml_function_coverage=1 00:11:34.533 --rc genhtml_legend=1 00:11:34.533 --rc geninfo_all_blocks=1 00:11:34.533 --rc geninfo_unexecuted_blocks=1 00:11:34.533 00:11:34.533 ' 00:11:34.533 13:42:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:34.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:34.533 --rc genhtml_branch_coverage=1 00:11:34.533 --rc genhtml_function_coverage=1 00:11:34.533 --rc genhtml_legend=1 00:11:34.533 --rc geninfo_all_blocks=1 00:11:34.533 --rc geninfo_unexecuted_blocks=1 00:11:34.533 00:11:34.533 ' 00:11:34.533 13:42:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:34.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:34.533 --rc genhtml_branch_coverage=1 00:11:34.533 --rc genhtml_function_coverage=1 00:11:34.533 --rc genhtml_legend=1 00:11:34.533 --rc geninfo_all_blocks=1 00:11:34.533 --rc geninfo_unexecuted_blocks=1 00:11:34.533 00:11:34.533 ' 00:11:34.533 13:42:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:34.533 13:42:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:11:34.533 13:42:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:34.533 13:42:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:34.533 13:42:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:34.533 13:42:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:34.533 13:42:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:34.533 13:42:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:34.533 13:42:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:34.533 13:42:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:34.533 13:42:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:34.533 13:42:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:34.533 13:42:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:11:34.533 13:42:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:11:34.533 13:42:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:34.533 13:42:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:34.533 13:42:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:34.533 13:42:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:34.533 13:42:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:34.533 13:42:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:11:34.533 13:42:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:34.533 13:42:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:34.533 13:42:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:34.533 13:42:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.533 13:42:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.533 13:42:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.533 13:42:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:11:34.533 13:42:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.533 13:42:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:11:34.533 13:42:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:34.533 13:42:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:34.533 13:42:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:34.533 13:42:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:34.533 13:42:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:34.533 13:42:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:34.533 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:34.533 13:42:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:34.533 13:42:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:34.533 13:42:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:34.533 13:42:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:34.533 13:42:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:11:34.533 13:42:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # 
nvmftestinit 00:11:34.533 13:42:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:11:34.533 13:42:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:34.533 13:42:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:34.533 13:42:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:34.533 13:42:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:34.533 13:42:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:34.533 13:42:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:34.533 13:42:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:34.533 13:42:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:34.533 13:42:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:34.533 13:42:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:11:34.533 13:42:34 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:41.099 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:41.099 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:11:41.099 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:41.099 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:41.099 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:41.099 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:41.099 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:41.099 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:11:41.099 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:41.099 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:11:41.099 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:11:41.099 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:11:41.099 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:11:41.099 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:11:41.099 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:11:41.099 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:41.099 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:41.099 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:41.099 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:41.099 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:41.099 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:41.099 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:41.099 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:41.099 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:41.099 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:41.099 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:41.099 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:41.099 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:41.099 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:11:41.099 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:11:41.099 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:11:41.099 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:11:41.099 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:11:41.099 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:41.099 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:41.099 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:11:41.099 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:11:41.099 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:11:41.099 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:11:41.099 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:41.099 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:41.099 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:11:41.099 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:11:41.099 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:41.099 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:11:41.099 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:11:41.099 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:11:41.099 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:11:41.099 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:41.099 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:41.099 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ rdma == 
rdma ]] 00:11:41.099 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:11:41.099 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:41.099 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:11:41.099 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:41.099 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:41.099 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:11:41.099 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:41.099 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:41.099 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:11:41.099 Found net devices under 0000:18:00.0: mlx_0_0 00:11:41.099 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:41.099 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:41.099 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:41.099 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:11:41.099 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:41.099 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:41.099 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:11:41.099 Found net devices under 0000:18:00.1: mlx_0_1 00:11:41.099 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:41.099 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:41.099 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:11:41.099 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:41.099 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:11:41.099 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:11:41.099 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@448 -- # rdma_device_init 00:11:41.099 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:11:41.099 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@62 -- # uname 00:11:41.099 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:11:41.099 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@66 -- # modprobe ib_cm 00:11:41.099 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@67 -- # modprobe ib_core 00:11:41.099 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@68 -- # modprobe ib_umad 00:11:41.099 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:11:41.099 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@70 -- # modprobe iw_cm 00:11:41.099 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:11:41.099 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:11:41.099 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@530 -- # allocate_nic_ips 00:11:41.099 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:41.099 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@77 -- # get_rdma_if_list 00:11:41.099 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:41.099 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:41.099 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:41.100 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:41.100 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:41.100 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:41.100 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:41.100 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:41.100 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:41.100 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@109 -- # continue 2 00:11:41.100 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:41.100 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:41.100 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:41.100 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:41.100 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:41.100 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@108 -- # echo mlx_0_1 00:11:41.100 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@109 -- # continue 2 00:11:41.100 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:41.100 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:11:41.100 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:41.100 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:41.100 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:41.100 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:41.100 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:11:41.100 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:11:41.100 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:11:41.100 2: mlx_0_0: mtu 1500 
qdisc mq state DOWN group default qlen 1000 00:11:41.100 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:11:41.100 altname enp24s0f0np0 00:11:41.100 altname ens785f0np0 00:11:41.100 inet 192.168.100.8/24 scope global mlx_0_0 00:11:41.100 valid_lft forever preferred_lft forever 00:11:41.100 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:41.100 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:11:41.100 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:41.100 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:41.100 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:41.100 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:41.100 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:11:41.100 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:11:41.100 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:11:41.100 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:41.100 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:11:41.100 altname enp24s0f1np1 00:11:41.100 altname ens785f1np1 00:11:41.100 inet 192.168.100.9/24 scope global mlx_0_1 00:11:41.100 valid_lft forever preferred_lft forever 00:11:41.100 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:11:41.100 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:41.100 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:41.100 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:11:41.100 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:11:41.100 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@90 -- # get_rdma_if_list 00:11:41.100 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:41.100 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:41.100 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:41.100 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:41.100 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:41.100 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:41.100 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:41.100 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:41.100 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:41.100 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@109 -- # continue 2 00:11:41.100 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:41.100 13:42:40 
nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:41.100 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:41.100 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:41.100 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:41.100 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@108 -- # echo mlx_0_1 00:11:41.100 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@109 -- # continue 2 00:11:41.100 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:41.100 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:11:41.100 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:41.100 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:41.100 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:41.100 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:41.100 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:41.100 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:11:41.100 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:41.100 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:41.100 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:41.100 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:41.100 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:11:41.100 192.168.100.9' 00:11:41.100 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:11:41.100 192.168.100.9' 00:11:41.100 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@485 -- # head -n 1 00:11:41.100 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:41.100 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:11:41.100 192.168.100.9' 00:11:41.100 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@486 -- # tail -n +2 00:11:41.100 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@486 -- # head -n 1 00:11:41.100 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:41.100 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:11:41.100 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:41.100 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:11:41.100 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:11:41.100 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:11:41.100 13:42:40 
nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:11:41.100 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:41.100 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:41.100 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:41.100 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=1570632 00:11:41.100 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 1570632 00:11:41.100 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:11:41.100 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 1570632 ']' 00:11:41.100 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:41.100 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:41.100 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:41.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:41.100 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:41.100 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:41.100 [2024-12-05 13:42:40.511086] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 00:11:41.100 [2024-12-05 13:42:40.511136] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:41.100 [2024-12-05 13:42:40.589152] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:41.100 [2024-12-05 13:42:40.612882] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:41.100 [2024-12-05 13:42:40.612921] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:41.100 [2024-12-05 13:42:40.612927] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:41.100 [2024-12-05 13:42:40.612934] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:41.100 [2024-12-05 13:42:40.612939] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
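Condensed, the nvmfappstart sequence traced above is a plain start-and-wait pattern: launch nvmf_tgt in the background with the requested core mask, remember its pid, and poll until the app listens on /var/tmp/spdk.sock. A minimal sketch of that pattern, with an illustrative retry count and sleep interval (waitforlisten in autotest_common.sh is more thorough):

  # Start the target on cores 1-3 (-m 0xE) with every tracepoint group enabled (-e 0xFFFF).
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  nvmfpid=$!
  # Poll until the UNIX domain RPC socket appears; RPCs can be issued after that.
  for ((i = 0; i < 100; i++)); do
      [[ -S /var/tmp/spdk.sock ]] && break
      sleep 0.1
  done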
00:11:41.100 [2024-12-05 13:42:40.614258] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:41.100 [2024-12-05 13:42:40.614287] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:41.100 [2024-12-05 13:42:40.614288] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:41.101 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:41.101 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:11:41.101 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:41.101 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:41.101 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:41.101 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:41.101 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -a 256 00:11:41.101 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.101 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:41.101 [2024-12-05 13:42:40.775879] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xba66c0/0xbaabb0) succeed. 00:11:41.101 [2024-12-05 13:42:40.790738] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xba7cb0/0xbec250) succeed. 00:11:41.101 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.101 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:11:41.101 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.101 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:41.101 Malloc0 00:11:41.101 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.101 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:11:41.101 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.101 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:41.101 Delay0 00:11:41.101 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.101 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:11:41.101 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.101 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:41.101 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.101 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:11:41.101 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 
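Stripped of the xtrace prefixes, the rpc_cmd calls traced above build the entire abort-test target; every command and flag below is copied verbatim from the trace (rpc_cmd is the autotest wrapper around scripts/rpc.py):

  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -a 256
  $rpc bdev_malloc_create 64 4096 -b Malloc0    # 64 MiB bdev with 4096-byte blocks
  $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0

The delay bdev wraps Malloc0 with one-second average and p99 latencies (the -r/-t/-w/-n values are in microseconds), which is presumably what keeps enough I/O queued for the abort example to have commands left to cancel.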
00:11:41.101 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:41.101 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.101 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:11:41.101 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.101 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:41.101 [2024-12-05 13:42:40.946657] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:41.358 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.358 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:11:41.358 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.358 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:41.358 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.358 13:42:40 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/abort -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:11:41.358 [2024-12-05 13:42:41.067823] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:11:43.889 Initializing NVMe Controllers 00:11:43.889 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:11:43.889 controller IO queue size 128 less than required 00:11:43.889 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:11:43.889 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:11:43.889 Initialization complete. Launching workers. 
00:11:43.889 NS: RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 46948 00:11:43.889 CTRLR: RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 47009, failed to submit 62 00:11:43.889 success 46949, unsuccessful 60, failed 0 00:11:43.889 13:42:43 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:11:43.889 13:42:43 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.889 13:42:43 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:43.889 13:42:43 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.889 13:42:43 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:11:43.889 13:42:43 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:11:43.889 13:42:43 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:43.889 13:42:43 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:11:43.889 13:42:43 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:11:43.889 13:42:43 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:11:43.889 13:42:43 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:11:43.889 13:42:43 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:43.889 13:42:43 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:11:43.889 rmmod nvme_rdma 00:11:43.889 rmmod nvme_fabrics 00:11:43.889 13:42:43 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:43.889 13:42:43 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:11:43.889 13:42:43 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:11:43.889 13:42:43 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 1570632 ']' 00:11:43.889 13:42:43 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 1570632 00:11:43.889 13:42:43 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 1570632 ']' 00:11:43.889 13:42:43 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 1570632 00:11:43.889 13:42:43 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:11:43.889 13:42:43 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:43.889 13:42:43 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1570632 00:11:43.889 13:42:43 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:11:43.889 13:42:43 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:11:43.889 13:42:43 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1570632' 00:11:43.889 killing process with pid 1570632 00:11:43.889 13:42:43 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 1570632 00:11:43.889 13:42:43 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 1570632 00:11:43.889 13:42:43 
nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:43.889 13:42:43 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:11:43.889 00:11:43.889 real 0m9.369s 00:11:43.889 user 0m12.525s 00:11:43.889 sys 0m5.046s 00:11:43.889 13:42:43 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:43.889 13:42:43 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:43.889 ************************************ 00:11:43.889 END TEST nvmf_abort 00:11:43.889 ************************************ 00:11:43.889 13:42:43 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=rdma 00:11:43.889 13:42:43 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:43.889 13:42:43 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:43.889 13:42:43 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:43.889 ************************************ 00:11:43.889 START TEST nvmf_ns_hotplug_stress 00:11:43.889 ************************************ 00:11:43.890 13:42:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=rdma 00:11:43.890 * Looking for test storage... 00:11:43.890 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:43.890 13:42:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:43.890 13:42:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:11:43.890 13:42:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:44.150 13:42:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:44.150 13:42:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:44.150 13:42:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:44.150 13:42:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:44.150 13:42:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:11:44.150 13:42:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:11:44.150 13:42:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:11:44.150 13:42:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:11:44.150 13:42:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:11:44.150 13:42:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:11:44.150 13:42:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:11:44.150 13:42:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:44.150 13:42:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 
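The scripts/common.sh trace that begins here (and continues below) is the same lcov version gate the abort test ran on entry: lt 1.15 2 splits both version strings on '.', '-' and ':' and compares them field by field, numerically. A condensed sketch of that comparison, assuming purely numeric fields (the real cmp_versions also normalizes each field through its decimal helper):

  lt() {
      local -a ver1 ver2
      IFS=.-: read -ra ver1 <<< "$1"
      IFS=.-: read -ra ver2 <<< "$2"
      local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      for (( v = 0; v < len; v++ )); do
          (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # ver1 is newer
          (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # ver1 is older
      done
      return 1                                              # equal versions
  }

In this run ver1_l=2 and ver2_l=1, the first fields already differ (1 < 2), so the check returns 0 and the branch/function coverage flags get added to LCOV_OPTS.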
00:11:44.150 13:42:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:11:44.150 13:42:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:44.150 13:42:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:44.150 13:42:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:11:44.150 13:42:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:11:44.150 13:42:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:44.150 13:42:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:11:44.150 13:42:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:11:44.150 13:42:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:11:44.150 13:42:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:11:44.150 13:42:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:44.150 13:42:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:11:44.150 13:42:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:11:44.150 13:42:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:44.150 13:42:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:44.150 13:42:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:11:44.150 13:42:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:44.150 13:42:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:44.150 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:44.150 --rc genhtml_branch_coverage=1 00:11:44.150 --rc genhtml_function_coverage=1 00:11:44.150 --rc genhtml_legend=1 00:11:44.150 --rc geninfo_all_blocks=1 00:11:44.150 --rc geninfo_unexecuted_blocks=1 00:11:44.150 00:11:44.150 ' 00:11:44.150 13:42:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:44.150 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:44.150 --rc genhtml_branch_coverage=1 00:11:44.150 --rc genhtml_function_coverage=1 00:11:44.150 --rc genhtml_legend=1 00:11:44.150 --rc geninfo_all_blocks=1 00:11:44.150 --rc geninfo_unexecuted_blocks=1 00:11:44.150 00:11:44.150 ' 00:11:44.150 13:42:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:44.150 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:44.150 --rc genhtml_branch_coverage=1 00:11:44.150 --rc genhtml_function_coverage=1 00:11:44.150 --rc genhtml_legend=1 00:11:44.150 --rc geninfo_all_blocks=1 00:11:44.150 --rc geninfo_unexecuted_blocks=1 00:11:44.150 00:11:44.150 ' 00:11:44.150 13:42:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:44.150 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:11:44.150 --rc genhtml_branch_coverage=1 00:11:44.150 --rc genhtml_function_coverage=1 00:11:44.150 --rc genhtml_legend=1 00:11:44.150 --rc geninfo_all_blocks=1 00:11:44.150 --rc geninfo_unexecuted_blocks=1 00:11:44.150 00:11:44.150 ' 00:11:44.150 13:42:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:44.150 13:42:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:11:44.150 13:42:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:44.150 13:42:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:44.150 13:42:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:44.150 13:42:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:44.150 13:42:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:44.150 13:42:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:44.150 13:42:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:44.151 13:42:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:44.151 13:42:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:44.151 13:42:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:44.151 13:42:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:11:44.151 13:42:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:11:44.151 13:42:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:44.151 13:42:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:44.151 13:42:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:44.151 13:42:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:44.151 13:42:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:44.151 13:42:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:11:44.151 13:42:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:44.151 13:42:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:44.151 13:42:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:44.151 13:42:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:44.151 13:42:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:44.151 13:42:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:44.151 13:42:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:11:44.151 13:42:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:44.151 13:42:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:11:44.151 13:42:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:44.151 13:42:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:44.151 13:42:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:44.151 13:42:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:44.151 13:42:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:44.151 13:42:43 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:44.151 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:44.151 13:42:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:44.151 13:42:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:44.151 13:42:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:44.151 13:42:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:11:44.151 13:42:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:11:44.151 13:42:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:11:44.151 13:42:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:44.151 13:42:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:44.151 13:42:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:44.151 13:42:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:44.151 13:42:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:44.151 13:42:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:44.151 13:42:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:44.151 13:42:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:44.151 13:42:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:44.151 13:42:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:11:44.151 13:42:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:11:50.725 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:50.725 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:11:50.725 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:50.725 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:50.725 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:50.725 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:50.725 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:50.725 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:11:50.725 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:50.725 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:11:50.725 13:42:49 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:11:50.725 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:11:50.725 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:11:50.725 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:11:50.725 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:11:50.725 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:50.725 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:50.725 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:50.725 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:50.725 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:50.725 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:50.725 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:50.725 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:50.725 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:50.725 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:50.725 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:50.725 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:50.725 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:50.725 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:11:50.725 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:11:50.725 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:11:50.725 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:11:50.725 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:11:50.725 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:50.725 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:50.725 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:11:50.725 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:11:50.725 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:11:50.725 13:42:49 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:11:50.725 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:50.725 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:50.725 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:11:50.725 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:11:50.725 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:50.725 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:11:50.725 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:11:50.725 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:11:50.725 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:11:50.725 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:50.725 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:50.725 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:11:50.725 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:11:50.725 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:50.725 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:11:50.725 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:50.725 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:50.725 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:11:50.725 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:50.725 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:50.725 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:11:50.725 Found net devices under 0000:18:00.0: mlx_0_0 00:11:50.725 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:50.725 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:50.725 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:50.725 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:11:50.725 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:50.725 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:11:50.725 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:11:50.725 Found net devices under 0000:18:00.1: mlx_0_1 00:11:50.725 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:50.725 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:50.725 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:11:50.725 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:50.725 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:11:50.725 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:11:50.725 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # rdma_device_init 00:11:50.725 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:11:50.725 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@62 -- # uname 00:11:50.725 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:11:50.725 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@66 -- # modprobe ib_cm 00:11:50.725 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@67 -- # modprobe ib_core 00:11:50.725 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@68 -- # modprobe ib_umad 00:11:50.725 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:11:50.725 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@70 -- # modprobe iw_cm 00:11:50.725 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:11:50.725 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:11:50.725 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@530 -- # allocate_nic_ips 00:11:50.725 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:50.725 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@77 -- # get_rdma_if_list 00:11:50.725 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:50.725 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:50.725 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:50.725 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:50.725 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:50.726 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:50.726 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:50.726 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
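
[Note: the device discovery traced above maps each Mellanox PCI function to its kernel netdev through sysfs. Reduced to its essentials, using the two PCI addresses the log found (they would differ on another host), it is the following loop.]

    for pci in 0000:18:00.0 0000:18:00.1; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # glob the interfaces sysfs exposes
        pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the sysfs path, keep the names
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
    done
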
nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:50.726 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:50.726 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@109 -- # continue 2 00:11:50.726 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:50.726 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:50.726 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:50.726 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:50.726 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:50.726 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@108 -- # echo mlx_0_1 00:11:50.726 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@109 -- # continue 2 00:11:50.726 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:50.726 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:11:50.726 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:50.726 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:50.726 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:50.726 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:50.726 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:11:50.726 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:11:50.726 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:11:50.726 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:50.726 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:11:50.726 altname enp24s0f0np0 00:11:50.726 altname ens785f0np0 00:11:50.726 inet 192.168.100.8/24 scope global mlx_0_0 00:11:50.726 valid_lft forever preferred_lft forever 00:11:50.726 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:50.726 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:11:50.726 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:50.726 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:50.726 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:50.726 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:50.726 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:11:50.726 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:11:50.726 13:42:49 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:11:50.726 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:50.726 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:11:50.726 altname enp24s0f1np1 00:11:50.726 altname ens785f1np1 00:11:50.726 inet 192.168.100.9/24 scope global mlx_0_1 00:11:50.726 valid_lft forever preferred_lft forever 00:11:50.726 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:11:50.726 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:50.726 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:50.726 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:11:50.726 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:11:50.726 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@90 -- # get_rdma_if_list 00:11:50.726 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:50.726 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:50.726 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:50.726 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:50.726 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:50.726 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:50.726 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:50.726 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:50.726 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:50.726 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@109 -- # continue 2 00:11:50.726 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:50.726 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:50.726 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:50.726 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:50.726 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:50.726 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@108 -- # echo mlx_0_1 00:11:50.726 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@109 -- # continue 2 00:11:50.726 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:50.726 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@91 -- # get_ip_address 
mlx_0_0 00:11:50.726 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:50.726 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:50.726 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:50.726 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:50.726 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:50.726 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:11:50.726 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:50.726 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:50.726 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:50.726 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:50.726 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:11:50.726 192.168.100.9' 00:11:50.726 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:11:50.726 192.168.100.9' 00:11:50.726 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@485 -- # head -n 1 00:11:50.726 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:50.726 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:11:50.726 192.168.100.9' 00:11:50.726 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@486 -- # tail -n +2 00:11:50.726 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@486 -- # head -n 1 00:11:50.726 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:50.726 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:11:50.726 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:50.726 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:11:50.726 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:11:50.726 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:11:50.726 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:11:50.726 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:50.726 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:50.726 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:11:50.726 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=1574466 00:11:50.726 13:42:49 
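
[Note: the address harvesting above reduces to one pipeline per interface; the first result becomes NVMF_FIRST_TARGET_IP (192.168.100.8 here) and the second NVMF_SECOND_TARGET_IP (192.168.100.9). A condensed form of the helper as it appears in the trace:]

    get_ip_address() {
        local interface=$1
        # First IPv4 address of the interface, prefix length stripped.
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0   # -> 192.168.100.8 on the logged host
    get_ip_address mlx_0_1   # -> 192.168.100.9
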
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 1574466 00:11:50.726 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:11:50.726 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 1574466 ']' 00:11:50.726 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:50.726 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:50.726 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:50.726 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:50.726 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:50.726 13:42:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:11:50.726 [2024-12-05 13:42:49.963166] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 00:11:50.726 [2024-12-05 13:42:49.963214] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:50.726 [2024-12-05 13:42:50.039927] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:50.726 [2024-12-05 13:42:50.063360] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:50.726 [2024-12-05 13:42:50.063407] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:50.726 [2024-12-05 13:42:50.063414] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:50.726 [2024-12-05 13:42:50.063420] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:50.726 [2024-12-05 13:42:50.063425] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
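
[Note: nvmf_tgt is launched above with core mask -m 0xE; binary 1110 selects cores 1 through 3, which matches the three "Reactor started on core N" notices that follow. A quick decode of the mask, bit i set meaning core i in use:]

    mask=0xE
    for i in 0 1 2 3; do
        (( (mask >> i) & 1 )) && echo "core $i selected"   # prints cores 1, 2, 3
    done
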
00:11:50.726 [2024-12-05 13:42:50.064728] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:50.726 [2024-12-05 13:42:50.064834] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:50.726 [2024-12-05 13:42:50.064836] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:50.726 13:42:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:50.726 13:42:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:11:50.726 13:42:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:50.726 13:42:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:50.726 13:42:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:11:50.726 13:42:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:50.726 13:42:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:11:50.726 13:42:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:11:50.726 [2024-12-05 13:42:50.387089] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x21986c0/0x219cbb0) succeed. 00:11:50.726 [2024-12-05 13:42:50.395274] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x2199cb0/0x21de250) succeed. 00:11:50.726 13:42:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:50.986 13:42:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:51.245 [2024-12-05 13:42:50.840984] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:51.245 13:42:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:11:51.245 13:42:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:11:51.503 Malloc0 00:11:51.503 13:42:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:11:51.761 Delay0 00:11:51.761 13:42:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:51.761 13:42:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
bdev_null_create NULL1 1000 512 00:11:52.019 NULL1 00:11:52.019 13:42:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:11:52.278 13:42:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1575007 00:11:52.278 13:42:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:11:52.278 13:42:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1575007 00:11:52.278 13:42:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:53.654 Read completed with error (sct=0, sc=11) 00:11:53.654 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:53.654 13:42:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:53.654 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:53.654 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:53.654 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:53.654 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:53.654 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:53.654 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:53.654 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:53.654 13:42:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:11:53.654 13:42:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:11:53.912 true 00:11:53.912 13:42:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1575007 00:11:53.912 13:42:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:54.895 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:54.895 13:42:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:54.895 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:54.895 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:54.895 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:54.895 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:54.895 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:54.895 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:54.895 Message suppressed 999 
times: Read completed with error (sct=0, sc=11) 00:11:54.895 13:42:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:11:54.895 13:42:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:11:54.895 true 00:11:55.153 13:42:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1575007 00:11:55.153 13:42:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:55.719 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:55.719 13:42:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:55.977 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:55.977 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:55.977 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:55.977 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:55.977 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:55.977 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:55.977 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:55.977 13:42:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:11:55.977 13:42:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:11:56.235 true 00:11:56.235 13:42:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1575007 00:11:56.235 13:42:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:57.172 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:57.172 13:42:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:57.172 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:57.172 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:57.172 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:57.172 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:57.172 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:57.172 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:57.172 13:42:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:11:57.172 13:42:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:11:57.431 true 00:11:57.431 13:42:57 
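
[Note: stripped of workspace paths and timestamps, the target setup and the hotplug cycle repeating above and below reduce to the rpc.py sequence sketched here, assuming rpc.py is on PATH and the target listens on the default /var/tmp/spdk.sock. The reads racing each namespace removal complete with sct=0, sc=11, which under the usual NVMe generic-status decoding (0x0B) is Invalid Namespace or Format, hence the suppressed-message floods.]

    # One-time target setup, as traced in the log:
    rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
    rpc.py bdev_malloc_create 32 512 -b Malloc0
    rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    rpc.py bdev_null_create NULL1 1000 512
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

    # Stress loop: while the perf job is alive, detach namespace 1, re-add it,
    # and grow NULL1 by one block per pass (1001, 1002, ... as in the log).
    null_size=1000
    while kill -0 "$PERF_PID" 2>/dev/null; do
        rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
        null_size=$((null_size + 1))
        rpc.py bdev_null_resize NULL1 "$null_size"
    done
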
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1575007 00:11:57.431 13:42:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:58.367 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:58.367 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:58.368 13:42:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:58.368 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:58.368 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:58.368 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:58.368 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:58.368 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:58.368 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:58.368 13:42:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:11:58.368 13:42:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:11:58.626 true 00:11:58.626 13:42:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1575007 00:11:58.626 13:42:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:59.563 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:59.563 13:42:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:59.563 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:59.563 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:59.563 13:42:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:11:59.563 13:42:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:11:59.823 true 00:11:59.823 13:42:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1575007 00:11:59.823 13:42:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:00.761 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:00.761 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:00.761 13:43:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:00.761 Message suppressed 999 times: Read 
completed with error (sct=0, sc=11) 00:12:00.761 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:00.761 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:00.761 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:00.761 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:00.761 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:00.761 13:43:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:12:00.761 13:43:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:12:01.019 true 00:12:01.019 13:43:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1575007 00:12:01.019 13:43:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:01.956 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:01.956 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:01.956 13:43:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:01.956 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:01.956 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:01.956 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:01.956 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:01.956 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:01.956 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:01.956 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:01.956 13:43:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:12:01.956 13:43:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:12:02.215 true 00:12:02.215 13:43:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1575007 00:12:02.215 13:43:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:03.214 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:03.214 13:43:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:03.214 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:03.214 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:03.214 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:03.214 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:03.214 Message suppressed 999 times: Read completed with error 
(sct=0, sc=11) 00:12:03.214 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:03.214 13:43:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:12:03.214 13:43:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:12:03.472 true 00:12:03.472 13:43:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1575007 00:12:03.472 13:43:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:04.412 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:04.412 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:04.412 13:43:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:04.412 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:04.412 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:04.412 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:04.412 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:04.412 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:04.412 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:04.412 13:43:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:12:04.412 13:43:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:12:04.672 true 00:12:04.672 13:43:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1575007 00:12:04.672 13:43:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:05.607 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:05.607 13:43:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:05.607 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:05.607 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:05.607 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:05.607 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:05.607 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:05.607 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:05.607 13:43:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:12:05.607 13:43:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:12:05.866 true 
00:12:05.866 13:43:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1575007 00:12:05.866 13:43:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:06.802 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:06.802 13:43:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:06.802 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:06.802 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:06.802 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:06.802 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:06.802 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:06.802 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:06.802 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:06.802 13:43:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:12:06.802 13:43:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:12:07.062 true 00:12:07.062 13:43:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1575007 00:12:07.062 13:43:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:07.999 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:07.999 13:43:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:07.999 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:07.999 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:07.999 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:07.999 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:07.999 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:07.999 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:07.999 13:43:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:12:07.999 13:43:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:12:08.258 true 00:12:08.258 13:43:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1575007 00:12:08.258 13:43:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:09.196 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
00:12:09.196 13:43:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:09.196 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:09.196 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:09.196 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:09.196 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:09.197 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:09.197 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:09.197 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:09.197 13:43:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:12:09.197 13:43:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:12:09.455 true 00:12:09.455 13:43:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1575007 00:12:09.455 13:43:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:10.387 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:10.387 13:43:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:10.387 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:10.387 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:10.387 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:10.387 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:10.387 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:10.387 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:10.387 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:10.387 13:43:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:12:10.388 13:43:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:12:10.645 true 00:12:10.645 13:43:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1575007 00:12:10.645 13:43:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:11.626 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:11.626 13:43:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:11.626 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:11.626 Message suppressed 999 times: Read 
completed with error (sct=0, sc=11) 00:12:11.626 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:11.626 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:11.626 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:11.626 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:11.626 13:43:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:12:11.626 13:43:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:12:11.884 true 00:12:11.884 13:43:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1575007 00:12:11.884 13:43:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:12.819 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:12.819 13:43:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:12.819 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:12.819 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:12.819 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:12.819 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:12.819 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:12.819 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:12.819 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:13.078 13:43:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:12:13.078 13:43:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:12:13.078 true 00:12:13.078 13:43:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1575007 00:12:13.078 13:43:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:14.013 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:14.013 13:43:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:14.013 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:14.013 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:14.013 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:14.013 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:14.013 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:14.013 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:14.013 Message suppressed 999 times: Read completed with error 
(sct=0, sc=11) 00:12:14.272 13:43:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:12:14.272 13:43:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:12:14.272 true 00:12:14.272 13:43:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1575007 00:12:14.272 13:43:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:15.208 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:15.208 13:43:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:15.208 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:15.208 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:15.208 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:15.208 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:15.208 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:15.208 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:15.208 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:15.466 13:43:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:12:15.466 13:43:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:12:15.466 true 00:12:15.466 13:43:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1575007 00:12:15.466 13:43:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:16.403 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:16.403 13:43:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:16.403 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:16.403 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:16.403 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:16.403 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:16.403 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:16.403 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:16.403 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:16.661 13:43:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:12:16.661 13:43:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:12:16.661 true 
00:12:16.661 13:43:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1575007 00:12:16.661 13:43:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:17.597 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:17.597 13:43:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:17.597 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:17.597 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:17.597 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:17.597 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:17.597 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:17.855 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:17.855 13:43:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:12:17.855 13:43:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:12:17.855 true 00:12:17.855 13:43:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1575007 00:12:17.855 13:43:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:18.790 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:18.790 13:43:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:18.790 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:18.790 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:18.790 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:19.049 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:19.049 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:19.049 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:19.049 13:43:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:12:19.049 13:43:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:12:19.307 true 00:12:19.307 13:43:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1575007 00:12:19.307 13:43:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:20.243 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:20.243 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
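
A reading aid for the stray tokens in this stretch: the bare "true" after each bdev_null_resize xtrace entry, and the bare bdev names ("null0", "null1", ...) after the later bdev_null_create calls, are the stdout of rpc.py, i.e. the JSON result of each RPC landing on its own timestamped line. Reproduced standalone with the same paths, and a hypothetical bdev name (null8) chosen so as not to collide with the bdevs this run actually creates:

$ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null8 100 4096
null8
$ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize null8 101
true
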
00:12:20.243 13:43:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:20.243 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:20.243 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:20.243 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:20.243 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:20.243 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:20.243 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:20.243 13:43:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:12:20.243 13:43:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:12:20.502 true 00:12:20.502 13:43:20 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1575007 00:12:20.502 13:43:20 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:21.439 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:21.439 13:43:20 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:21.439 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:21.439 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:21.439 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:21.439 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:21.439 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:21.439 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:21.439 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:21.439 13:43:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:12:21.439 13:43:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:12:21.699 true 00:12:21.699 13:43:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1575007 00:12:21.699 13:43:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:22.632 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:22.632 13:43:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:22.632 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:22.632 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:22.632 13:43:22 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:12:22.632 13:43:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:12:22.891 true 00:12:22.891 13:43:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1575007 00:12:22.891 13:43:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:22.891 13:43:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:23.150 13:43:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:12:23.150 13:43:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:12:23.410 true 00:12:23.410 13:43:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1575007 00:12:23.410 13:43:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:23.410 13:43:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:23.669 13:43:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:12:23.669 13:43:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:12:23.927 true 00:12:23.927 13:43:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1575007 00:12:23.927 13:43:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:24.186 13:43:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:24.186 13:43:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:12:24.186 13:43:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:12:24.445 true 00:12:24.445 13:43:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1575007 00:12:24.445 13:43:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:24.445 Initializing NVMe Controllers 00:12:24.445 Attached to NVMe over Fabrics controller at 
192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:12:24.445 Controller IO queue size 128, less than required.
00:12:24.445 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:12:24.445 Controller IO queue size 128, less than required.
00:12:24.445 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:12:24.445 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:12:24.445 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:12:24.445 Initialization complete. Launching workers.
00:12:24.445 ========================================================
00:12:24.445                                                                           Latency(us)
00:12:24.445 Device Information                                                 : IOPS      MiB/s    Average        min        max
00:12:24.445 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:  6240.93    3.05   17864.63     837.83  1127808.94
00:12:24.445 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 35605.63   17.39    3594.95    1950.24   272093.48
00:12:24.445 ========================================================
00:12:24.445 Total                                                              : 41846.57   20.43    5723.11     837.83  1127808.94
00:12:24.445
00:12:24.704 13:43:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:12:24.704 13:43:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029
00:12:24.704 13:43:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 true
00:12:24.963 13:43:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1575007
00:12:24.963 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1575007) - No such process
00:12:24.963 13:43:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1575007
00:12:24.963 13:43:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:12:25.223 13:43:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:12:25.223 13:43:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:12:25.223 13:43:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:12:25.223 13:43:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:12:25.223 13:43:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:12:25.223 13:43:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:12:25.481 null0
00:12:25.481 13:43:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:12:25.481 13:43:25
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:12:25.481 13:43:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:12:25.738 null1 00:12:25.738 13:43:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:12:25.738 13:43:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:12:25.738 13:43:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:12:25.738 null2 00:12:25.997 13:43:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:12:25.997 13:43:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:12:25.997 13:43:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:12:25.997 null3 00:12:25.997 13:43:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:12:25.997 13:43:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:12:25.997 13:43:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:12:26.256 null4 00:12:26.256 13:43:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:12:26.256 13:43:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:12:26.256 13:43:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:12:26.598 null5 00:12:26.598 13:43:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:12:26.598 13:43:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:12:26.598 13:43:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:12:26.598 null6 00:12:26.598 13:43:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:12:26.598 13:43:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:12:26.598 13:43:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:12:26.941 null7 00:12:26.941 13:43:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:12:26.941 13:43:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:12:26.941 13:43:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 
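
With the single-namespace phase over (the I/O generator exited, so kill -0 failed and NSID 1 and 2 were dropped at @54-@55), the script provisions one 100 MB, 4096-byte-block null bdev per worker, null0 through null7, before the concurrent phase begins. A sketch of the setup loop the @58-@60 xtrace implies; $rpc is again a stand-in for /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py:

nthreads=8                                     # @58: one worker per bdev
pids=()                                        # @58: will collect one background PID per worker
for (( i = 0; i < nthreads; i++ )); do         # @59
    $rpc bdev_null_create "null$i" 100 4096    # @60: name, size in MB, block size in bytes
done
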
00:12:26.941 13:43:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:26.941 13:43:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:12:26.941 13:43:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:12:26.941 13:43:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:12:26.941 13:43:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:12:26.941 13:43:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:26.941 13:43:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:26.941 13:43:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:12:26.941 13:43:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:26.941 13:43:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:12:26.941 13:43:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:12:26.941 13:43:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:12:26.941 13:43:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:26.941 13:43:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:12:26.941 13:43:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:12:26.941 13:43:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:26.941 13:43:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:26.941 13:43:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:12:26.941 13:43:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:12:26.941 13:43:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:12:26.941 13:43:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:26.941 13:43:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:12:26.941 13:43:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:12:26.941 13:43:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:26.941 13:43:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:26.941 13:43:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:12:26.941 13:43:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:12:26.941 13:43:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:12:26.941 13:43:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:26.941 13:43:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:12:26.941 13:43:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:12:26.941 13:43:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:26.941 13:43:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:26.941 13:43:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:12:26.941 13:43:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:12:26.941 13:43:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:12:26.941 13:43:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:26.941 13:43:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:12:26.941 13:43:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:12:26.941 13:43:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:26.941 13:43:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:26.941 13:43:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:12:26.941 13:43:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:12:26.941 13:43:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:12:26.941 13:43:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:26.941 13:43:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:12:26.941 13:43:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:12:26.941 13:43:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:26.941 13:43:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:26.941 13:43:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:12:26.941 13:43:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:12:26.941 13:43:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:12:26.941 13:43:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:26.941 13:43:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:12:26.941 13:43:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:12:26.941 13:43:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:26.941 13:43:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:26.941 13:43:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
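
The add_remove launches interleaved through this stretch all follow one pattern: worker k hot-adds NSID k backed by bdev null<k-1>, then hot-removes it, ten times over, and all eight workers run in the background against the same subsystem while the parent waits on them. That shared subsystem is what produces the scrambled ordering of @17/@18 entries above and below this point. A reconstruction consistent with the @14-@18 and @62-@66 xtrace, continuing the sketch above (the real script may differ in detail):

add_remove() {
    local nsid=$1 bdev=$2                                                        # @14
    local i
    for (( i = 0; i < 10; i++ )); do                                             # @16
        $rpc nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev" # @17: attach with explicit NSID
        $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"         # @18: detach it again
    done
}

for (( i = 0; i < nthreads; i++ )); do   # @62
    add_remove $((i + 1)) "null$i" &     # @63: NSID i+1 backed by null<i>
    pids+=($!)                           # @64: remember the worker's PID
done
wait "${pids[@]}"                        # @66: 1581408 1581410 ... 1581421 in this run
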
00:12:26.941 13:43:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:12:26.941 13:43:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:12:26.941 13:43:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:26.941 13:43:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:12:26.942 13:43:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1581408 1581410 1581411 1581413 1581415 1581417 1581419 1581421 00:12:26.942 13:43:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:12:26.942 13:43:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:26.942 13:43:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:26.942 13:43:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:26.942 13:43:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:26.942 13:43:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:26.942 13:43:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:26.942 13:43:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:26.942 13:43:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:26.942 13:43:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:26.942 13:43:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:27.211 13:43:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:27.211 13:43:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:27.211 13:43:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:27.211 13:43:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:27.211 13:43:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:27.211 13:43:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:27.211 13:43:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:27.211 13:43:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:27.211 13:43:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:27.211 13:43:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:27.211 13:43:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:27.211 13:43:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:27.211 13:43:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:27.211 13:43:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:27.211 13:43:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:27.211 13:43:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:27.211 13:43:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:27.211 13:43:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:27.211 13:43:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:27.211 13:43:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:27.211 13:43:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:27.211 13:43:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:27.211 13:43:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:27.211 13:43:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:27.470 13:43:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:27.470 13:43:27 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:27.470 13:43:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:27.470 13:43:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:27.470 13:43:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:27.470 13:43:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:27.470 13:43:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:27.470 13:43:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:27.470 13:43:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:27.470 13:43:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:27.470 13:43:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:27.470 13:43:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:27.470 13:43:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:27.470 13:43:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:27.470 13:43:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:27.470 13:43:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:27.470 13:43:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:27.470 13:43:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:27.470 13:43:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:27.470 13:43:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:27.470 13:43:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:27.470 13:43:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:27.470 13:43:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:27.470 13:43:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:27.470 13:43:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:27.470 13:43:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:27.470 13:43:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:27.470 13:43:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:27.470 13:43:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:27.470 13:43:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:27.470 13:43:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:27.470 13:43:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:27.729 13:43:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:27.729 13:43:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:27.729 13:43:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:27.729 13:43:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:27.729 13:43:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:27.729 13:43:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:27.729 13:43:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:27.729 13:43:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:27.989 13:43:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:27.989 13:43:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:27.989 13:43:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:27.989 13:43:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:27.989 13:43:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:27.989 13:43:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:27.989 13:43:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:27.989 13:43:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:27.989 13:43:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:27.989 13:43:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:27.989 13:43:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:27.989 13:43:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:27.989 13:43:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:27.989 13:43:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:27.989 13:43:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:27.989 13:43:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:27.989 13:43:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:27.989 13:43:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:27.989 13:43:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:27.989 13:43:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:27.989 13:43:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:27.989 13:43:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:27.989 13:43:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:27.989 13:43:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:28.248 13:43:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:28.248 13:43:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:28.248 13:43:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:28.248 13:43:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:28.248 13:43:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:28.248 13:43:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:28.248 13:43:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:28.248 13:43:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:28.248 13:43:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:28.248 13:43:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:28.248 13:43:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:28.248 13:43:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:28.248 13:43:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:28.248 13:43:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:28.248 13:43:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:28.248 13:43:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:28.248 13:43:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:28.248 13:43:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:28.248 13:43:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:28.248 13:43:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:28.248 13:43:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:28.248 13:43:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:28.248 13:43:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:28.248 13:43:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:28.248 13:43:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:28.248 13:43:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:28.248 13:43:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:28.248 13:43:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:28.248 13:43:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:28.248 13:43:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:28.248 13:43:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:28.248 13:43:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:28.508 13:43:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:28.508 13:43:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:28.508 13:43:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:28.508 13:43:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:28.508 13:43:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:28.508 13:43:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:28.508 13:43:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:28.508 13:43:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:28.810 13:43:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:28.810 13:43:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:28.810 13:43:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:28.810 13:43:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:28.810 13:43:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:28.810 13:43:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:28.810 13:43:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:28.810 13:43:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:28.810 13:43:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:28.810 13:43:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:28.810 13:43:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:28.810 13:43:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:28.810 13:43:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:28.810 13:43:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:28.810 13:43:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:28.810 13:43:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:28.810 13:43:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:28.810 13:43:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:28.810 13:43:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:28.810 13:43:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:28.810 13:43:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:28.810 13:43:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:28.810 13:43:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:28.810 13:43:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:28.810 13:43:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:28.810 13:43:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:28.810 13:43:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:28.810 13:43:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:28.810 13:43:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:28.810 13:43:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:28.810 13:43:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:28.810 13:43:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:29.068 13:43:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:29.068 13:43:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:29.068 13:43:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:29.068 13:43:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:29.068 13:43:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:29.068 13:43:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:29.068 13:43:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:29.068 13:43:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:29.068 13:43:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:29.068 13:43:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:29.068 13:43:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:29.068 13:43:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:29.068 13:43:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:29.068 13:43:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:29.068 13:43:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:29.068 13:43:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:29.068 13:43:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:29.069 13:43:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:29.069 13:43:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:29.069 13:43:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:29.069 13:43:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:29.069 13:43:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:29.069 13:43:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:29.069 13:43:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:29.327 13:43:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:29.327 13:43:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:29.327 13:43:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:29.327 13:43:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:29.327 13:43:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:29.327 13:43:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:29.327 13:43:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:29.327 13:43:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:29.327 13:43:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:29.327 13:43:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:29.327 13:43:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:29.585 13:43:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:29.585 13:43:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:29.585 13:43:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:29.585 13:43:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:29.585 13:43:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:29.585 13:43:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:29.585 13:43:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:29.585 13:43:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:29.585 13:43:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:29.585 13:43:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:29.585 13:43:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i 
< 10 )) 00:12:29.585 13:43:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:29.585 13:43:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:29.585 13:43:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:29.585 13:43:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:29.585 13:43:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:29.585 13:43:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:29.585 13:43:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:29.585 13:43:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:29.585 13:43:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:29.585 13:43:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:29.585 13:43:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:29.585 13:43:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:29.585 13:43:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:29.585 13:43:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:29.585 13:43:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:29.585 13:43:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:29.585 13:43:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:29.585 13:43:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:29.843 13:43:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:29.843 13:43:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:29.843 13:43:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:29.843 13:43:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:29.843 13:43:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:29.843 13:43:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:29.843 13:43:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:29.843 13:43:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:29.843 13:43:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:29.843 13:43:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:29.843 13:43:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:29.843 13:43:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:29.843 13:43:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:29.843 13:43:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:29.843 13:43:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:29.843 13:43:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:29.843 13:43:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:29.843 13:43:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:29.843 13:43:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:29.843 13:43:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:29.843 13:43:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:29.843 13:43:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:29.843 13:43:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:29.843 13:43:29 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:30.101 13:43:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:30.101 13:43:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:30.101 13:43:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:30.101 13:43:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:30.101 13:43:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:30.101 13:43:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:30.102 13:43:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:30.102 13:43:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:30.102 13:43:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:30.102 13:43:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:30.102 13:43:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:30.102 13:43:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:30.102 13:43:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:30.102 13:43:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:30.102 13:43:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:30.102 13:43:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:30.102 13:43:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:30.102 13:43:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 
-- # (( ++i )) 00:12:30.102 13:43:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:30.102 13:43:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:30.102 13:43:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:30.102 13:43:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:30.102 13:43:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:30.359 13:43:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:30.359 13:43:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:30.359 13:43:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:30.359 13:43:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:30.359 13:43:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:30.359 13:43:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:30.359 13:43:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:30.359 13:43:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:30.359 13:43:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:30.359 13:43:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:30.359 13:43:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:30.359 13:43:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:30.359 13:43:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:30.359 13:43:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:30.359 13:43:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:30.359 13:43:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:30.359 13:43:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:30.617 13:43:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:30.617 13:43:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:30.617 13:43:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:30.617 13:43:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:30.617 13:43:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:30.617 13:43:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:30.617 13:43:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:30.617 13:43:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:30.617 13:43:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:30.618 13:43:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:30.618 13:43:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:30.618 13:43:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:30.618 13:43:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:30.618 13:43:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:30.618 13:43:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:30.618 13:43:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:30.618 13:43:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:12:30.618 13:43:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:12:30.618 13:43:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:30.618 13:43:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:12:30.618 13:43:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:12:30.618 13:43:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:12:30.618 13:43:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:12:30.618 13:43:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:30.618 13:43:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:12:30.618 rmmod nvme_rdma 00:12:30.618 rmmod nvme_fabrics 00:12:30.618 13:43:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:30.618 13:43:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:12:30.618 13:43:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:12:30.618 13:43:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 1574466 ']' 00:12:30.618 13:43:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 1574466 00:12:30.618 13:43:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 1574466 ']' 00:12:30.618 13:43:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 1574466 00:12:30.618 13:43:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:12:30.618 13:43:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:30.618 13:43:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1574466 00:12:30.618 13:43:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:12:30.618 13:43:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:12:30.618 13:43:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1574466' 00:12:30.618 killing process with pid 1574466 00:12:30.618 13:43:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 1574466 00:12:30.618 13:43:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 1574466 00:12:30.876 13:43:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:30.876 13:43:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:12:30.876 00:12:30.876 real 0m47.090s 00:12:30.876 user 3m16.767s 00:12:30.876 sys 0m11.719s 00:12:30.876 13:43:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:30.876 13:43:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:12:30.876 ************************************ 00:12:30.876 END TEST nvmf_ns_hotplug_stress 00:12:30.876 ************************************ 00:12:30.876 13:43:30 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=rdma 00:12:30.876 13:43:30 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:30.876 13:43:30 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:30.876 13:43:30 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:31.135 ************************************ 00:12:31.135 START TEST nvmf_delete_subsystem 00:12:31.135 ************************************ 00:12:31.135 13:43:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=rdma 00:12:31.135 * Looking for test storage... 00:12:31.135 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:31.135 13:43:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:31.135 13:43:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version 00:12:31.135 13:43:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:31.135 13:43:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:31.135 13:43:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:31.135 13:43:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:31.135 13:43:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:31.135 13:43:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:12:31.135 13:43:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:12:31.135 13:43:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:12:31.135 13:43:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:12:31.135 13:43:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:12:31.135 13:43:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:12:31.135 13:43:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:12:31.135 13:43:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:31.135 13:43:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:12:31.135 13:43:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:12:31.135 13:43:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:31.135 13:43:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:31.135 13:43:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:12:31.135 13:43:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:12:31.135 13:43:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:31.135 13:43:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:12:31.135 13:43:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:12:31.135 13:43:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:12:31.135 13:43:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:12:31.135 13:43:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:31.135 13:43:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:12:31.135 13:43:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:12:31.135 13:43:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:31.135 13:43:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:31.135 13:43:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:12:31.135 13:43:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:31.135 13:43:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:31.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:31.135 --rc genhtml_branch_coverage=1 00:12:31.135 --rc genhtml_function_coverage=1 00:12:31.135 --rc genhtml_legend=1 00:12:31.135 --rc geninfo_all_blocks=1 00:12:31.135 --rc geninfo_unexecuted_blocks=1 00:12:31.135 00:12:31.135 ' 00:12:31.135 13:43:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:31.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:31.135 --rc genhtml_branch_coverage=1 00:12:31.135 --rc genhtml_function_coverage=1 00:12:31.135 --rc genhtml_legend=1 00:12:31.135 --rc geninfo_all_blocks=1 00:12:31.135 --rc geninfo_unexecuted_blocks=1 00:12:31.135 00:12:31.135 ' 00:12:31.135 13:43:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:31.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:31.135 --rc genhtml_branch_coverage=1 00:12:31.135 --rc genhtml_function_coverage=1 00:12:31.135 --rc genhtml_legend=1 00:12:31.135 --rc geninfo_all_blocks=1 00:12:31.135 --rc geninfo_unexecuted_blocks=1 00:12:31.135 00:12:31.135 ' 00:12:31.135 13:43:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:31.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:31.135 --rc genhtml_branch_coverage=1 00:12:31.135 --rc genhtml_function_coverage=1 00:12:31.135 --rc genhtml_legend=1 00:12:31.135 --rc geninfo_all_blocks=1 00:12:31.135 --rc geninfo_unexecuted_blocks=1 00:12:31.135 00:12:31.135 ' 00:12:31.135 13:43:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:12:31.135 13:43:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:12:31.135 13:43:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:31.135 13:43:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:31.135 13:43:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:31.135 13:43:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:31.136 13:43:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:31.136 13:43:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:31.136 13:43:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:31.136 13:43:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:31.136 13:43:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:31.136 13:43:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:31.136 13:43:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:12:31.136 13:43:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:12:31.136 13:43:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:31.136 13:43:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:31.136 13:43:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:31.136 13:43:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:31.136 13:43:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:12:31.136 13:43:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:12:31.136 13:43:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:31.136 13:43:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:31.136 13:43:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:31.136 13:43:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.136 13:43:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.136 13:43:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.136 13:43:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:12:31.136 13:43:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.136 13:43:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:12:31.136 13:43:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:31.136 13:43:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:31.136 13:43:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:31.136 13:43:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:31.136 13:43:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:31.136 13:43:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:31.136 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:31.136 13:43:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:31.136 13:43:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:31.136 13:43:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:31.136 13:43:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:12:31.136 13:43:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:12:31.136 13:43:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:31.136 13:43:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:31.136 13:43:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:31.136 13:43:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:31.136 13:43:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:31.136 13:43:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:31.136 13:43:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:31.136 13:43:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:31.136 13:43:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:31.136 13:43:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:12:31.136 13:43:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:37.736 13:43:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:37.736 13:43:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:12:37.736 13:43:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:37.736 13:43:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:37.736 13:43:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:37.736 13:43:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:37.736 13:43:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:37.737 13:43:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:12:37.737 13:43:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:37.737 13:43:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:12:37.737 13:43:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:12:37.737 13:43:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:12:37.737 13:43:36 
nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:12:37.737 13:43:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:12:37.737 13:43:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:12:37.737 13:43:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:37.737 13:43:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:37.737 13:43:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:37.737 13:43:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:37.737 13:43:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:37.737 13:43:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:37.737 13:43:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:37.737 13:43:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:37.737 13:43:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:37.737 13:43:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:37.737 13:43:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:37.737 13:43:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:37.737 13:43:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:37.737 13:43:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:12:37.737 13:43:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:12:37.737 13:43:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:12:37.737 13:43:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:12:37.737 13:43:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:12:37.737 13:43:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:37.737 13:43:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:37.737 13:43:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:12:37.737 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:12:37.737 13:43:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:12:37.737 13:43:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:12:37.737 13:43:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:37.737 
13:43:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:37.737 13:43:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:12:37.737 13:43:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:12:37.737 13:43:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:37.737 13:43:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:12:37.737 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:12:37.737 13:43:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:12:37.737 13:43:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:12:37.737 13:43:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:37.737 13:43:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:37.737 13:43:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:12:37.737 13:43:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:12:37.737 13:43:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:37.737 13:43:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:12:37.737 13:43:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:37.737 13:43:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:37.737 13:43:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:12:37.737 13:43:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:37.737 13:43:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:37.737 13:43:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:12:37.737 Found net devices under 0000:18:00.0: mlx_0_0 00:12:37.737 13:43:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:37.737 13:43:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:37.737 13:43:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:37.737 13:43:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:12:37.737 13:43:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:37.737 13:43:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:37.737 13:43:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:12:37.737 Found net devices under 0000:18:00.1: mlx_0_1 00:12:37.737 13:43:36 
nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:37.737 13:43:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:37.737 13:43:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:12:37.737 13:43:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:37.737 13:43:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:12:37.737 13:43:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:12:37.737 13:43:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # rdma_device_init 00:12:37.737 13:43:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:12:37.737 13:43:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@62 -- # uname 00:12:37.737 13:43:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:12:37.737 13:43:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@66 -- # modprobe ib_cm 00:12:37.737 13:43:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@67 -- # modprobe ib_core 00:12:37.737 13:43:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@68 -- # modprobe ib_umad 00:12:37.737 13:43:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:12:37.737 13:43:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@70 -- # modprobe iw_cm 00:12:37.737 13:43:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:12:37.737 13:43:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:12:37.737 13:43:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@530 -- # allocate_nic_ips 00:12:37.737 13:43:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:12:37.737 13:43:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@77 -- # get_rdma_if_list 00:12:37.737 13:43:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:37.737 13:43:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:12:37.737 13:43:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:12:37.737 13:43:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:37.737 13:43:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:12:37.737 13:43:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:37.737 13:43:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:37.737 13:43:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:37.737 13:43:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@108 -- # echo mlx_0_0 00:12:37.737 13:43:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@109 -- # 
continue 2 00:12:37.737 13:43:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:37.737 13:43:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:37.737 13:43:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:37.738 13:43:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:37.738 13:43:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:37.738 13:43:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@108 -- # echo mlx_0_1 00:12:37.738 13:43:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@109 -- # continue 2 00:12:37.738 13:43:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:12:37.738 13:43:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:12:37.738 13:43:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:12:37.738 13:43:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:12:37.738 13:43:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:37.738 13:43:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:37.738 13:43:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:12:37.738 13:43:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:12:37.738 13:43:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:12:37.738 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:37.738 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:12:37.738 altname enp24s0f0np0 00:12:37.738 altname ens785f0np0 00:12:37.738 inet 192.168.100.8/24 scope global mlx_0_0 00:12:37.738 valid_lft forever preferred_lft forever 00:12:37.738 13:43:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:12:37.738 13:43:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:12:37.738 13:43:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:12:37.738 13:43:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:12:37.738 13:43:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:37.738 13:43:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:37.738 13:43:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:12:37.738 13:43:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:12:37.738 13:43:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:12:37.738 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:37.738 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:12:37.738 altname enp24s0f1np1 00:12:37.738 altname 
ens785f1np1 00:12:37.738 inet 192.168.100.9/24 scope global mlx_0_1 00:12:37.738 valid_lft forever preferred_lft forever 00:12:37.738 13:43:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:12:37.738 13:43:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:37.738 13:43:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:12:37.738 13:43:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:12:37.738 13:43:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:12:37.738 13:43:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@90 -- # get_rdma_if_list 00:12:37.738 13:43:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:37.738 13:43:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:12:37.738 13:43:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:12:37.738 13:43:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:37.738 13:43:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:12:37.738 13:43:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:37.738 13:43:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:37.738 13:43:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:37.738 13:43:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@108 -- # echo mlx_0_0 00:12:37.738 13:43:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@109 -- # continue 2 00:12:37.738 13:43:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:37.738 13:43:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:37.738 13:43:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:37.738 13:43:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:37.738 13:43:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:37.738 13:43:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@108 -- # echo mlx_0_1 00:12:37.738 13:43:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@109 -- # continue 2 00:12:37.738 13:43:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:12:37.738 13:43:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:12:37.738 13:43:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:12:37.738 13:43:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:12:37.738 13:43:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem 
-- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:37.738 13:43:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:37.738 13:43:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:12:37.738 13:43:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:12:37.738 13:43:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:12:37.738 13:43:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:12:37.738 13:43:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:37.738 13:43:36 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:37.738 13:43:37 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:12:37.738 192.168.100.9' 00:12:37.738 13:43:37 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:12:37.738 192.168.100.9' 00:12:37.738 13:43:37 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@485 -- # head -n 1 00:12:37.738 13:43:37 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:12:37.738 13:43:37 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:12:37.738 192.168.100.9' 00:12:37.738 13:43:37 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@486 -- # tail -n +2 00:12:37.738 13:43:37 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@486 -- # head -n 1 00:12:37.738 13:43:37 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:12:37.738 13:43:37 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:12:37.738 13:43:37 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:12:37.738 13:43:37 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:12:37.738 13:43:37 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:12:37.738 13:43:37 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:12:37.738 13:43:37 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:12:37.738 13:43:37 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:37.738 13:43:37 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:37.738 13:43:37 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:37.738 13:43:37 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=1585624 00:12:37.738 13:43:37 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 1585624 00:12:37.738 13:43:37 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:12:37.738 13:43:37 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- 
common/autotest_common.sh@835 -- # '[' -z 1585624 ']' 00:12:37.738 13:43:37 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:37.738 13:43:37 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:37.738 13:43:37 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:37.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:37.738 13:43:37 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:37.738 13:43:37 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:37.738 [2024-12-05 13:43:37.105794] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 00:12:37.738 [2024-12-05 13:43:37.105845] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:37.738 [2024-12-05 13:43:37.179627] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:37.738 [2024-12-05 13:43:37.200881] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:37.738 [2024-12-05 13:43:37.200916] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:37.738 [2024-12-05 13:43:37.200923] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:37.738 [2024-12-05 13:43:37.200928] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:37.738 [2024-12-05 13:43:37.200936] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
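[annotation] The trace above is nvmfappstart -m 0x3 followed by waitforlisten: launch the target, then poll its RPC socket until it answers. A simplified sketch of that pattern (paths shortened; the real helper retries up to max_retries=100 before giving up):
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &    # reactors come up on cores 0-1
    nvmfpid=$!
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1    # keep polling until the app listens on the UNIX domain socket
    done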
00:12:37.738 [2024-12-05 13:43:37.201994] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:37.738 [2024-12-05 13:43:37.201995] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:37.738 13:43:37 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:37.739 13:43:37 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:12:37.739 13:43:37 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:37.739 13:43:37 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:37.739 13:43:37 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:37.739 13:43:37 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:37.739 13:43:37 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:12:37.739 13:43:37 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.739 13:43:37 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:37.739 [2024-12-05 13:43:37.354938] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1914860/0x1918d50) succeed. 00:12:37.739 [2024-12-05 13:43:37.362757] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1915db0/0x195a3f0) succeed. 00:12:37.739 13:43:37 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.739 13:43:37 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:37.739 13:43:37 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.739 13:43:37 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:37.739 13:43:37 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.739 13:43:37 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:37.739 13:43:37 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.739 13:43:37 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:37.739 [2024-12-05 13:43:37.452092] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:37.739 13:43:37 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.739 13:43:37 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:37.739 13:43:37 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.739 13:43:37 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:37.739 NULL1 00:12:37.739 13:43:37 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.739 13:43:37 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:12:37.739 13:43:37 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.739 13:43:37 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:37.739 Delay0 00:12:37.739 13:43:37 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.739 13:43:37 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:37.739 13:43:37 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.739 13:43:37 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:37.739 13:43:37 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.739 13:43:37 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1585650 00:12:37.739 13:43:37 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:12:37.739 13:43:37 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:12:37.739 [2024-12-05 13:43:37.574856] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
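[annotation] Condensed, the rpc_cmd sequence the trace just walked through amounts to the following (rpc.py path assumed; every argument is copied verbatim from the trace):
    ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192    # RDMA transport, 8 KiB I/O unit
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10    # allow any host, max 10 namespaces
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    ./scripts/rpc.py bdev_null_create NULL1 1000 512    # 1000 MiB null backing bdev, 512-byte blocks
    ./scripts/rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000    # ~1 s (1,000,000 us) added latency on every I/O path
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0    # expose the slow bdev as NSID 1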
00:12:39.644 13:43:39 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:39.644 13:43:39 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.644 13:43:39 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:41.020 NVMe io qpair process completion error 00:12:41.020 NVMe io qpair process completion error 00:12:41.020 NVMe io qpair process completion error 00:12:41.020 NVMe io qpair process completion error 00:12:41.020 NVMe io qpair process completion error 00:12:41.020 NVMe io qpair process completion error 00:12:41.020 13:43:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.020 13:43:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:12:41.020 13:43:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1585650 00:12:41.020 13:43:40 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:12:41.586 13:43:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:12:41.587 13:43:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1585650 00:12:41.587 13:43:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:12:41.846 Read completed with error (sct=0, sc=8) 00:12:41.846 starting I/O failed: -6 00:12:41.846 Write completed with error (sct=0, sc=8) 00:12:41.846 starting I/O failed: -6 00:12:41.846 Read completed with error (sct=0, sc=8) 00:12:41.846 starting I/O failed: -6 00:12:41.846 Read completed with error (sct=0, sc=8) 00:12:41.846 starting I/O failed: -6 00:12:41.846 Write completed with error (sct=0, sc=8) 00:12:41.846 starting I/O failed: -6 00:12:41.846 Write completed with error (sct=0, sc=8) 00:12:41.846 starting I/O failed: -6 00:12:41.846 Write completed with error (sct=0, sc=8) 00:12:41.846 starting I/O failed: -6 00:12:41.846 Read completed with error (sct=0, sc=8) 00:12:41.846 starting I/O failed: -6 00:12:41.846 Write completed with error (sct=0, sc=8) 00:12:41.846 starting I/O failed: -6 00:12:41.846 Read completed with error (sct=0, sc=8) 00:12:41.846 starting I/O failed: -6 00:12:41.846 Write completed with error (sct=0, sc=8) 00:12:41.846 starting I/O failed: -6 00:12:41.846 Read completed with error (sct=0, sc=8) 00:12:41.846 starting I/O failed: -6 00:12:41.846 Read completed with error (sct=0, sc=8) 00:12:41.846 starting I/O failed: -6 00:12:41.846 Read completed with error (sct=0, sc=8) 00:12:41.846 starting I/O failed: -6 00:12:41.846 Read completed with error (sct=0, sc=8) 00:12:41.846 starting I/O failed: -6 00:12:41.846 Write completed with error (sct=0, sc=8) 00:12:41.846 starting I/O failed: -6 00:12:41.846 Write completed with error (sct=0, sc=8) 00:12:41.846 starting I/O failed: -6 00:12:41.846 Read completed with error (sct=0, sc=8) 00:12:41.846 starting I/O failed: -6 00:12:41.846 Write completed with error (sct=0, sc=8) 00:12:41.846 starting I/O failed: -6 00:12:41.846 Read completed with error (sct=0, sc=8) 00:12:41.846 starting I/O failed: -6 00:12:41.846 Read completed with error (sct=0, sc=8) 00:12:41.846 starting I/O failed: -6 00:12:41.846 Read completed with error (sct=0, sc=8) 00:12:41.846 starting I/O 
failed: -6 00:12:41.846 Read completed with error (sct=0, sc=8) 00:12:41.846 Write completed with error (sct=0, sc=8) [... several hundred further interleaved 'Read/Write completed with error (sct=0, sc=8)' completions and 'starting I/O failed: -6' markers, identical in form, trimmed ...]
00:12:41.847 Read completed with error (sct=0, sc=8) 00:12:41.847 Read completed with error (sct=0, sc=8) 00:12:41.847 Write completed with error (sct=0, sc=8) 00:12:41.847 Read completed with error (sct=0, sc=8) 00:12:41.847 Write completed with error (sct=0, sc=8) 00:12:41.847 Read completed with error (sct=0, sc=8) 00:12:41.847 Read completed with error (sct=0, sc=8) 00:12:41.847 Read completed with error (sct=0, sc=8) 00:12:41.847 Read completed with error (sct=0, sc=8) 00:12:41.847 Write completed with error (sct=0, sc=8) 00:12:41.847 Read completed with error (sct=0, sc=8) 00:12:41.847 Read completed with error (sct=0, sc=8) 00:12:41.847 Read completed with error (sct=0, sc=8) 00:12:41.847 Write completed with error (sct=0, sc=8) 00:12:41.847 Read completed with error (sct=0, sc=8) 00:12:41.847 Read completed with error (sct=0, sc=8) 00:12:41.847 Read completed with error (sct=0, sc=8) 00:12:41.847 Write completed with error (sct=0, sc=8) 00:12:41.847 Write completed with error (sct=0, sc=8) 00:12:41.847 Initializing NVMe Controllers 00:12:41.847 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:12:41.847 Controller IO queue size 128, less than required. 00:12:41.847 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:12:41.847 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:12:41.847 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:12:41.847 Initialization complete. Launching workers. 00:12:41.847 ======================================================== 00:12:41.847 Latency(us) 00:12:41.847 Device Information : IOPS MiB/s Average min max 00:12:41.847 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 80.51 0.04 1593327.42 1000083.10 2974958.07 00:12:41.847 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 80.51 0.04 1594476.15 1001117.70 2975519.48 00:12:41.847 ======================================================== 00:12:41.847 Total : 161.02 0.08 1593901.79 1000083.10 2975519.48 00:12:41.847 00:12:41.847 13:43:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:12:41.847 13:43:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1585650 00:12:41.847 13:43:41 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:12:41.847 [2024-12-05 13:43:41.665982] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:12:41.847 [2024-12-05 13:43:41.666023] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 
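[annotation] The storm above is the point of the test: nvmf_delete_subsystem at delete_subsystem.sh@32 tears the subsystem down underneath perf's live connection, so queued requests complete with sct=0/sc=8 (NVMe generic status 08h, Command Aborted due to SQ Deletion) and the host side finally drops the qpair with CQ transport error -6. The script then only waits for perf to exit; a sketch of the @34-@38 polling loop visible in the trace:
    delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do    # signal 0 probes liveness without killing
        sleep 0.5
        (( delay++ > 30 )) && exit 1             # fail the test if perf outlives ~15 s
    done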
00:12:41.847 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:12:42.415 13:43:42 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:12:42.415 13:43:42 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1585650 00:12:42.415 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1585650) - No such process 00:12:42.415 13:43:42 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1585650 00:12:42.415 13:43:42 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:12:42.415 13:43:42 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 1585650 00:12:42.415 13:43:42 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:12:42.415 13:43:42 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:42.415 13:43:42 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:12:42.415 13:43:42 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:42.415 13:43:42 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 1585650 00:12:42.415 13:43:42 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:12:42.415 13:43:42 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:42.415 13:43:42 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:42.415 13:43:42 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:42.415 13:43:42 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:42.415 13:43:42 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.415 13:43:42 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:42.415 13:43:42 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.415 13:43:42 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:42.415 13:43:42 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.415 13:43:42 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:42.415 [2024-12-05 13:43:42.184478] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:42.415 13:43:42 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.415 13:43:42 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:42.415 13:43:42 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:12:42.415 13:43:42 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:42.415 13:43:42 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.415 13:43:42 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1586627 00:12:42.415 13:43:42 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:12:42.415 13:43:42 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:12:42.415 13:43:42 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1586627 00:12:42.415 13:43:42 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:42.674 [2024-12-05 13:43:42.289382] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:12:42.932 13:43:42 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:42.932 13:43:42 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1586627 00:12:42.932 13:43:42 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:43.499 13:43:43 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:43.499 13:43:43 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1586627 00:12:43.499 13:43:43 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:44.064 13:43:43 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:44.064 13:43:43 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1586627 00:12:44.064 13:43:43 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:44.656 13:43:44 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:44.656 13:43:44 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1586627 00:12:44.656 13:43:44 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:44.913 13:43:44 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:44.913 13:43:44 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1586627 00:12:44.913 13:43:44 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:45.478 13:43:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:45.478 13:43:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1586627 00:12:45.478 13:43:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:46.042 13:43:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:46.042 13:43:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1586627 00:12:46.042 13:43:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:46.607 13:43:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:46.607 13:43:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1586627 00:12:46.607 13:43:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:47.171 13:43:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:47.171 13:43:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1586627 00:12:47.171 13:43:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:47.429 13:43:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:47.429 13:43:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1586627 00:12:47.429 13:43:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:47.995 13:43:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:47.995 13:43:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1586627 00:12:47.995 13:43:47 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:48.560 13:43:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:48.560 13:43:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1586627 00:12:48.560 13:43:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:49.126 13:43:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:49.126 13:43:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1586627 00:12:49.126 13:43:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:49.693 13:43:49 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:49.693 13:43:49 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1586627 00:12:49.693 13:43:49 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:49.693 Initializing NVMe Controllers 00:12:49.693 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:12:49.693 Controller IO queue size 128, less than required. 00:12:49.693 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:12:49.693 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:12:49.693 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:12:49.693 Initialization complete. Launching workers. 00:12:49.693 ======================================================== 00:12:49.693 Latency(us) 00:12:49.693 Device Information : IOPS MiB/s Average min max 00:12:49.693 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1001674.89 1000056.09 1004366.59 00:12:49.693 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1002639.82 1000061.31 1006532.04 00:12:49.693 ======================================================== 00:12:49.693 Total : 256.00 0.12 1002157.36 1000056.09 1006532.04 00:12:49.693 00:12:49.952 13:43:49 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:49.952 13:43:49 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1586627 00:12:49.952 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1586627) - No such process 00:12:49.952 13:43:49 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1586627 00:12:49.952 13:43:49 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:12:49.952 13:43:49 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:12:49.952 13:43:49 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:49.952 13:43:49 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:12:49.952 13:43:49 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:12:49.952 13:43:49 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:12:49.952 13:43:49 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:12:49.952 13:43:49 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:49.952 13:43:49 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:12:49.952 rmmod nvme_rdma 00:12:49.952 rmmod nvme_fabrics 00:12:50.211 13:43:49 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:50.211 13:43:49 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:12:50.211 13:43:49 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:12:50.211 13:43:49 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 1585624 ']' 00:12:50.211 13:43:49 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 1585624 00:12:50.211 13:43:49 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 1585624 ']' 00:12:50.211 13:43:49 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 1585624 00:12:50.211 13:43:49 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:12:50.211 13:43:49 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 
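[annotation] With nothing deleting the subsystem this time, perf exits on its own inside the 20 x 0.5 s budget, and the ~1.00 s averages in the table line up with the 1,000,000 us latencies configured on Delay0. nvmftestfini then unloads the kernel initiator modules before killing the target; condensed from the nvmf/common.sh trace above (set +e because rmmod can legitimately fail while references remain; the retry/backoff detail is an assumption):
    sync
    set +e
    for i in {1..20}; do
        modprobe -v -r nvme-rdma && modprobe -v -r nvme-fabrics && break
        sleep 1    # assumption: the real loop's retry cadence may differ
    done
    set -e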
00:12:50.211 13:43:49 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1585624 00:12:50.211 13:43:49 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:50.211 13:43:49 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:50.211 13:43:49 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1585624' 00:12:50.211 killing process with pid 1585624 00:12:50.211 13:43:49 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 1585624 00:12:50.211 13:43:49 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 1585624 00:12:50.470 13:43:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:50.470 13:43:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:12:50.470 00:12:50.470 real 0m19.319s 00:12:50.470 user 0m48.827s 00:12:50.470 sys 0m5.619s 00:12:50.470 13:43:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:50.470 13:43:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:50.470 ************************************ 00:12:50.471 END TEST nvmf_delete_subsystem 00:12:50.471 ************************************ 00:12:50.471 13:43:50 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=rdma 00:12:50.471 13:43:50 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:50.471 13:43:50 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:50.471 13:43:50 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:50.471 ************************************ 00:12:50.471 START TEST nvmf_host_management 00:12:50.471 ************************************ 00:12:50.471 13:43:50 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=rdma 00:12:50.471 * Looking for test storage... 
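[annotation] One note before the host_management test proceeds: the killprocess helper that just tore down pid 1585624 checks the command name first so it never kills a bare sudo wrapper. A sketch, with the sudo branch summarized as an assumption:
    kill -0 "$pid"                                     # confirm the target process still exists
    process_name=$(ps --no-headers -o comm= "$pid")    # resolves to reactor_0 here
    if [ "$process_name" = sudo ]; then
        :    # assumption: the real helper walks to the sudo child and kills that instead
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"    # reap it and surface its exit status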
00:12:50.471 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:50.471 13:43:50 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:50.471 13:43:50 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:12:50.471 13:43:50 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:50.471 13:43:50 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:50.471 13:43:50 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:50.471 13:43:50 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:50.471 13:43:50 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:50.471 13:43:50 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:12:50.471 13:43:50 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:12:50.471 13:43:50 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:12:50.471 13:43:50 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:12:50.471 13:43:50 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:12:50.471 13:43:50 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:12:50.471 13:43:50 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:12:50.471 13:43:50 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:50.471 13:43:50 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:12:50.471 13:43:50 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:12:50.471 13:43:50 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:50.471 13:43:50 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:50.471 13:43:50 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:12:50.471 13:43:50 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:12:50.471 13:43:50 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:50.471 13:43:50 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:12:50.471 13:43:50 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:12:50.471 13:43:50 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:12:50.471 13:43:50 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:12:50.471 13:43:50 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:50.471 13:43:50 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:12:50.471 13:43:50 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:12:50.471 13:43:50 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:50.471 13:43:50 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:50.471 13:43:50 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:12:50.471 13:43:50 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:50.471 13:43:50 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:50.471 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:50.471 --rc genhtml_branch_coverage=1 00:12:50.471 --rc genhtml_function_coverage=1 00:12:50.471 --rc genhtml_legend=1 00:12:50.471 --rc geninfo_all_blocks=1 00:12:50.471 --rc geninfo_unexecuted_blocks=1 00:12:50.471 00:12:50.471 ' 00:12:50.471 13:43:50 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:50.471 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:50.471 --rc genhtml_branch_coverage=1 00:12:50.471 --rc genhtml_function_coverage=1 00:12:50.471 --rc genhtml_legend=1 00:12:50.471 --rc geninfo_all_blocks=1 00:12:50.471 --rc geninfo_unexecuted_blocks=1 00:12:50.471 00:12:50.471 ' 00:12:50.471 13:43:50 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:50.471 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:50.471 --rc genhtml_branch_coverage=1 00:12:50.471 --rc genhtml_function_coverage=1 00:12:50.471 --rc genhtml_legend=1 00:12:50.471 --rc geninfo_all_blocks=1 00:12:50.471 --rc geninfo_unexecuted_blocks=1 00:12:50.471 00:12:50.471 ' 00:12:50.471 13:43:50 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:50.471 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:50.471 --rc genhtml_branch_coverage=1 00:12:50.471 --rc genhtml_function_coverage=1 00:12:50.471 --rc genhtml_legend=1 00:12:50.471 --rc geninfo_all_blocks=1 00:12:50.471 --rc geninfo_unexecuted_blocks=1 00:12:50.471 00:12:50.471 ' 00:12:50.471 13:43:50 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:12:50.471 13:43:50 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:12:50.471 13:43:50 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:50.471 13:43:50 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:50.471 13:43:50 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:50.471 13:43:50 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:50.471 13:43:50 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:50.471 13:43:50 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:50.471 13:43:50 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:50.471 13:43:50 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:50.471 13:43:50 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:50.471 13:43:50 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:50.471 13:43:50 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:12:50.471 13:43:50 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:12:50.471 13:43:50 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:50.731 13:43:50 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:50.731 13:43:50 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:50.731 13:43:50 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:50.731 13:43:50 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:12:50.731 13:43:50 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:12:50.731 13:43:50 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:50.731 13:43:50 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:50.731 13:43:50 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:50.731 13:43:50 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:50.731 13:43:50 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:50.731 13:43:50 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:50.731 13:43:50 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:12:50.731 13:43:50 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:50.731 13:43:50 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:12:50.731 13:43:50 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:50.731 13:43:50 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:50.731 13:43:50 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:50.731 13:43:50 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:50.731 13:43:50 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:50.731 13:43:50 nvmf_rdma.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:50.731 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:50.731 13:43:50 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:50.731 13:43:50 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:50.731 13:43:50 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:50.731 13:43:50 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:50.731 13:43:50 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:50.731 13:43:50 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:12:50.731 13:43:50 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:12:50.731 13:43:50 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:50.731 13:43:50 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:50.731 13:43:50 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:50.731 13:43:50 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:50.731 13:43:50 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:50.731 13:43:50 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:50.731 13:43:50 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:50.731 13:43:50 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:50.731 13:43:50 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:50.731 13:43:50 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:12:50.731 13:43:50 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:57.327 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:57.327 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:12:57.327 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:57.327 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:57.327 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:57.327 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:57.327 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:57.327 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:12:57.327 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:57.327 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:12:57.327 13:43:56 
nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:12:57.327 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:12:57.327 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:12:57.327 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:12:57.327 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:12:57.327 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:57.327 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:57.327 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:57.327 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:57.327 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:57.327 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:57.327 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:57.327 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:57.327 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:57.327 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:57.327 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:57.327 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:57.327 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:57.327 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:12:57.327 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:12:57.327 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:12:57.327 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:12:57.327 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:12:57.327 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:57.327 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:57.327 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:12:57.327 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:12:57.327 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:12:57.327 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:12:57.327 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:57.327 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:57.327 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:12:57.327 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:12:57.327 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:57.327 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:12:57.327 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:12:57.327 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:12:57.327 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:12:57.327 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:57.327 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:57.327 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:12:57.327 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:12:57.327 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:57.327 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:12:57.327 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:57.327 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:57.327 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:12:57.327 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:57.327 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:57.327 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:12:57.327 Found net devices under 0000:18:00.0: mlx_0_0 00:12:57.327 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:57.327 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:57.327 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:57.327 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:12:57.327 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:57.327 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:57.327 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found 
net devices under 0000:18:00.1: mlx_0_1' 00:12:57.327 Found net devices under 0000:18:00.1: mlx_0_1 00:12:57.327 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:57.327 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:57.327 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:12:57.327 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:57.327 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:12:57.327 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:12:57.327 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@448 -- # rdma_device_init 00:12:57.327 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:12:57.327 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@62 -- # uname 00:12:57.327 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:12:57.327 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@66 -- # modprobe ib_cm 00:12:57.327 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@67 -- # modprobe ib_core 00:12:57.327 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@68 -- # modprobe ib_umad 00:12:57.327 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:12:57.327 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@70 -- # modprobe iw_cm 00:12:57.327 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:12:57.327 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:12:57.327 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@530 -- # allocate_nic_ips 00:12:57.327 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:12:57.327 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@77 -- # get_rdma_if_list 00:12:57.327 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:57.327 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:12:57.327 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:12:57.327 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:57.327 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:12:57.327 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:57.327 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:57.327 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:57.327 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@108 -- # echo mlx_0_0 
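Aside (not part of the trace): the gather_supported_nvmf_pci_devs steps above resolve each matched PCI function to its kernel net device with a plain sysfs glob, then strip the directory prefix. A minimal standalone sketch of that technique, reusing the 0000:18:00.0 address from this run:

#!/usr/bin/env bash
# Map one PCI function to its net interface(s), the same pattern the trace shows:
# pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) then "${pci_net_devs[@]##*/}".
pci=0000:18:00.0
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
pci_net_devs=("${pci_net_devs[@]##*/}")   # keep only the interface names
echo "Found net devices under $pci: ${pci_net_devs[*]}"

On this machine that prints mlx_0_0, matching the "Found net devices" line in the log.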
00:12:57.327 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@109 -- # continue 2 00:12:57.327 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:57.327 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:57.328 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:57.328 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:57.328 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:57.328 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@108 -- # echo mlx_0_1 00:12:57.328 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@109 -- # continue 2 00:12:57.328 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:12:57.328 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:12:57.328 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:12:57.328 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:12:57.328 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:57.328 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:57.328 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:12:57.328 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:12:57.328 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:12:57.328 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:57.328 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:12:57.328 altname enp24s0f0np0 00:12:57.328 altname ens785f0np0 00:12:57.328 inet 192.168.100.8/24 scope global mlx_0_0 00:12:57.328 valid_lft forever preferred_lft forever 00:12:57.328 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:12:57.328 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:12:57.328 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:12:57.328 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:12:57.328 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:57.328 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:57.328 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:12:57.328 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:12:57.328 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:12:57.328 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:57.328 link/ether 50:6b:4b:b4:ac:7b brd 
ff:ff:ff:ff:ff:ff 00:12:57.328 altname enp24s0f1np1 00:12:57.328 altname ens785f1np1 00:12:57.328 inet 192.168.100.9/24 scope global mlx_0_1 00:12:57.328 valid_lft forever preferred_lft forever 00:12:57.328 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:12:57.328 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:57.328 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:12:57.328 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:12:57.328 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:12:57.328 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@90 -- # get_rdma_if_list 00:12:57.328 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:57.328 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:12:57.328 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:12:57.328 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:57.328 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:12:57.328 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:57.328 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:57.328 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:57.328 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@108 -- # echo mlx_0_0 00:12:57.328 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@109 -- # continue 2 00:12:57.328 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:57.328 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:57.328 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:57.328 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:57.328 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:57.328 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@108 -- # echo mlx_0_1 00:12:57.328 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@109 -- # continue 2 00:12:57.328 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:12:57.328 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:12:57.328 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:12:57.328 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:12:57.328 13:43:56 
nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:57.328 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:57.328 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:12:57.328 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:12:57.328 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:12:57.328 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:12:57.328 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:57.328 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:57.328 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:12:57.328 192.168.100.9' 00:12:57.328 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:12:57.328 192.168.100.9' 00:12:57.328 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@485 -- # head -n 1 00:12:57.328 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:12:57.328 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:12:57.328 192.168.100.9' 00:12:57.328 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@486 -- # tail -n +2 00:12:57.328 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@486 -- # head -n 1 00:12:57.328 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:12:57.328 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:12:57.328 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:12:57.328 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:12:57.328 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:12:57.328 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:12:57.328 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:12:57.328 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:12:57.328 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:12:57.328 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:57.328 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:57.328 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:57.328 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=1591292 00:12:57.328 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 1591292 00:12:57.328 
13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:12:57.328 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 1591292 ']' 00:12:57.328 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:57.328 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:57.328 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:57.328 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:57.328 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:57.328 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:57.328 [2024-12-05 13:43:56.521825] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 00:12:57.328 [2024-12-05 13:43:56.521878] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:57.328 [2024-12-05 13:43:56.601409] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:57.328 [2024-12-05 13:43:56.624656] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:57.328 [2024-12-05 13:43:56.624694] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:57.328 [2024-12-05 13:43:56.624700] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:57.328 [2024-12-05 13:43:56.624705] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:57.328 [2024-12-05 13:43:56.624710] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
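Aside (not part of the trace): the allocate_nic_ips pass traced a few entries back reduces to standard ip(8) parsing. A minimal sketch of the same awk/cut pipeline, assuming the mlx_0_0/mlx_0_1 interface names from this run:

# Primary IPv4 of an interface: exactly the pipeline get_ip_address runs above.
get_ipv4() {
    ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
}

rdma_ips="$(get_ipv4 mlx_0_0)
$(get_ipv4 mlx_0_1)"
first_ip=$(echo "$rdma_ips" | head -n 1)                # 192.168.100.8 in this run
second_ip=$(echo "$rdma_ips" | tail -n +2 | head -n 1)  # 192.168.100.9 in this run

The head -n 1 / tail -n +2 pair is how the trace derives NVMF_FIRST_TARGET_IP and NVMF_SECOND_TARGET_IP from RDMA_IP_LIST.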
00:12:57.328 [2024-12-05 13:43:56.626141] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:57.329 [2024-12-05 13:43:56.626247] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:57.329 [2024-12-05 13:43:56.626353] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:57.329 [2024-12-05 13:43:56.626355] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:12:57.329 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:57.329 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:12:57.329 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:57.329 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:57.329 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:57.329 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:57.329 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:12:57.329 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.329 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:57.329 [2024-12-05 13:43:56.776513] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x19a1230/0x19a5720) succeed. 00:12:57.329 [2024-12-05 13:43:56.784830] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x19a28c0/0x19e6dc0) succeed. 
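Aside (not part of the trace): outside the test harness's rpc_cmd wrapper, the target setup traced here (RDMA transport, Malloc0 backing bdev, listener on 192.168.100.8 port 4420) maps to ordinary scripts/rpc.py calls. A sketch under that assumption — the cnode0 naming matches the bdevperf JSON further down, but optional flags can vary by SPDK version:

# Same transport options the test passes: rdma, 1024 shared buffers, 8 KiB IO unit.
scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
# 64 MiB at 512 B blocks, per MALLOC_BDEV_SIZE=64 / MALLOC_BLOCK_SIZE=512 above.
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
    -t rdma -a 192.168.100.8 -s 4420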
00:12:57.329 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.329 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:12:57.329 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:57.329 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:57.329 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:12:57.329 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:12:57.329 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:12:57.329 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.329 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:57.329 Malloc0 00:12:57.329 [2024-12-05 13:43:56.960764] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:57.329 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.329 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:12:57.329 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:57.329 13:43:56 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:57.329 13:43:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=1591589 00:12:57.329 13:43:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1591589 /var/tmp/bdevperf.sock 00:12:57.329 13:43:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 1591589 ']' 00:12:57.329 13:43:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:57.329 13:43:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:12:57.329 13:43:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:57.329 13:43:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:12:57.329 13:43:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:57.329 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
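Aside (not part of the trace): waitforlisten above simply blocks until the app answers on its RPC socket. One way to express that pattern — a sketch only, not the actual autotest_common.sh implementation, and wait_for_rpc_sock is a hypothetical name:

wait_for_rpc_sock() {
    local sock=$1 retries=${2:-100}
    while (( retries-- > 0 )); do
        # rpc_get_methods is a cheap query; success means the app is listening.
        scripts/rpc.py -s "$sock" rpc_get_methods &>/dev/null && return 0
        sleep 0.1
    done
    return 1
}
wait_for_rpc_sock /var/tmp/bdevperf.sock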
00:12:57.329 13:43:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:12:57.329 13:43:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:57.329 13:43:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:12:57.329 13:43:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:57.329 13:43:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:12:57.329 13:43:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:12:57.329 { 00:12:57.329 "params": { 00:12:57.329 "name": "Nvme$subsystem", 00:12:57.329 "trtype": "$TEST_TRANSPORT", 00:12:57.329 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:57.329 "adrfam": "ipv4", 00:12:57.329 "trsvcid": "$NVMF_PORT", 00:12:57.329 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:57.329 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:57.329 "hdgst": ${hdgst:-false}, 00:12:57.329 "ddgst": ${ddgst:-false} 00:12:57.329 }, 00:12:57.329 "method": "bdev_nvme_attach_controller" 00:12:57.329 } 00:12:57.329 EOF 00:12:57.329 )") 00:12:57.329 13:43:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:12:57.329 13:43:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:12:57.329 13:43:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:12:57.329 13:43:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:12:57.329 "params": { 00:12:57.329 "name": "Nvme0", 00:12:57.329 "trtype": "rdma", 00:12:57.329 "traddr": "192.168.100.8", 00:12:57.329 "adrfam": "ipv4", 00:12:57.329 "trsvcid": "4420", 00:12:57.329 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:12:57.329 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:12:57.329 "hdgst": false, 00:12:57.329 "ddgst": false 00:12:57.329 }, 00:12:57.329 "method": "bdev_nvme_attach_controller" 00:12:57.329 }' 00:12:57.329 [2024-12-05 13:43:57.054496] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 00:12:57.329 [2024-12-05 13:43:57.054540] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1591589 ] 00:12:57.329 [2024-12-05 13:43:57.130328] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:57.329 [2024-12-05 13:43:57.151498] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:57.588 Running I/O for 10 seconds... 
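Aside (not part of the trace): the gen_nvmf_target_json output interleaved above is produced by expanding a quoted heredoc per subsystem and normalizing it with jq. The single-subsystem case in isolation, using the values this run resolved ($TEST_TRANSPORT=rdma, $NVMF_FIRST_TARGET_IP=192.168.100.8, $NVMF_PORT=4420):

subsystem=0
config=$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "rdma",
    "traddr": "192.168.100.8",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)
echo "$config" | jq .   # the same JSON the printf '%s\n' in the trace emits

bdevperf receives this through --json /dev/fd/63, i.e. bash process substitution of the generator's output.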
00:12:57.589 13:43:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:57.589 13:43:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:12:57.589 13:43:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:12:57.589 13:43:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.589 13:43:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:57.589 13:43:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.589 13:43:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:57.589 13:43:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:12:57.589 13:43:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:12:57.589 13:43:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:12:57.589 13:43:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:12:57.589 13:43:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:12:57.589 13:43:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:12:57.589 13:43:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:12:57.589 13:43:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:12:57.589 13:43:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:12:57.589 13:43:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.589 13:43:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:57.589 13:43:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.589 13:43:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=174 00:12:57.589 13:43:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 174 -ge 100 ']' 00:12:57.589 13:43:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:12:57.589 13:43:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:12:57.589 13:43:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:12:57.589 13:43:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:12:57.589 13:43:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.589 13:43:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 
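Aside (not part of the trace): the waitforio calls above amount to a bounded poll on the bdev's read counter. Reassembled from the trace into one readable block (the sleep between polls is an assumption; this run passes on the first iteration with read_io_count=174):

ret=1
for ((i = 10; i != 0; i--)); do
    # Ask bdevperf for Nvme0n1's iostat, as the rpc_cmd/jq pair in the trace does.
    read_io_count=$(rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 \
        | jq -r '.bdevs[0].num_read_ops')
    if [ "$read_io_count" -ge 100 ]; then
        ret=0   # enough verify I/O observed; safe to remove the host
        break
    fi
    sleep 0.25  # assumed pacing between polls; not visible in the xtrace
done

Once ret=0, the test removes host0 from cnode0 mid-I/O, which is what triggers the ABORTED - SQ DELETION completions dumped below.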
00:12:57.589 13:43:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.589 13:43:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:12:57.589 13:43:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.589 13:43:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:57.589 13:43:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.589 13:43:57 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:12:58.785 299.00 IOPS, 18.69 MiB/s [2024-12-05T12:43:58.638Z] [2024-12-05 13:43:58.435781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:38272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200016ceff80 len:0x10000 key:0x181900 00:12:58.785 [2024-12-05 13:43:58.435812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fd2c3000 sqhd:7210 p:0 m:0 dnr:0 00:12:58.785 [2024-12-05 13:43:58.435827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:38400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200016cdff00 len:0x10000 key:0x181900 00:12:58.785 [2024-12-05 13:43:58.435835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fd2c3000 sqhd:7210 p:0 m:0 dnr:0 00:12:58.785 [2024-12-05 13:43:58.435843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:38528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200016ccfe80 len:0x10000 key:0x181900 00:12:58.785 [2024-12-05 13:43:58.435850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fd2c3000 sqhd:7210 p:0 m:0 dnr:0 00:12:58.785 [2024-12-05 13:43:58.435858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:38656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200016cbfe00 len:0x10000 key:0x181900 00:12:58.785 [2024-12-05 13:43:58.435865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fd2c3000 sqhd:7210 p:0 m:0 dnr:0 00:12:58.785 [2024-12-05 13:43:58.435872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:38784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200016cafd80 len:0x10000 key:0x181900 00:12:58.785 [2024-12-05 13:43:58.435878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fd2c3000 sqhd:7210 p:0 m:0 dnr:0 00:12:58.785 [2024-12-05 13:43:58.435886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:38912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200016c9fd00 len:0x10000 key:0x181900 00:12:58.785 [2024-12-05 13:43:58.435892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fd2c3000 sqhd:7210 p:0 m:0 dnr:0 00:12:58.785 [2024-12-05 13:43:58.435900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:39040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200016c8fc80 len:0x10000 key:0x181900 00:12:58.785 [2024-12-05 13:43:58.435905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fd2c3000 
sqhd:7210 p:0 m:0 dnr:0 00:12:58.785 [2024-12-05 13:43:58.435914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:39168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200016c7fc00 len:0x10000 key:0x181900 00:12:58.785 [2024-12-05 13:43:58.435919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fd2c3000 sqhd:7210 p:0 m:0 dnr:0 00:12:58.785 [2024-12-05 13:43:58.435927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:39296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200016c6fb80 len:0x10000 key:0x181900 00:12:58.785 [2024-12-05 13:43:58.435933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fd2c3000 sqhd:7210 p:0 m:0 dnr:0 00:12:58.785 [2024-12-05 13:43:58.435945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:39424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200016c5fb00 len:0x10000 key:0x181900 00:12:58.785 [2024-12-05 13:43:58.435951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fd2c3000 sqhd:7210 p:0 m:0 dnr:0 00:12:58.785 [2024-12-05 13:43:58.435959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:39552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200016c4fa80 len:0x10000 key:0x181900 00:12:58.785 [2024-12-05 13:43:58.435966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fd2c3000 sqhd:7210 p:0 m:0 dnr:0 00:12:58.785 [2024-12-05 13:43:58.435974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:39680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200016c3fa00 len:0x10000 key:0x181900 00:12:58.785 [2024-12-05 13:43:58.435980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fd2c3000 sqhd:7210 p:0 m:0 dnr:0 00:12:58.785 [2024-12-05 13:43:58.435987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:39808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200016c2f980 len:0x10000 key:0x181900 00:12:58.785 [2024-12-05 13:43:58.435993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fd2c3000 sqhd:7210 p:0 m:0 dnr:0 00:12:58.785 [2024-12-05 13:43:58.436001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:39936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200016c1f900 len:0x10000 key:0x181900 00:12:58.785 [2024-12-05 13:43:58.436007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fd2c3000 sqhd:7210 p:0 m:0 dnr:0 00:12:58.785 [2024-12-05 13:43:58.436015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:40064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200016c0f880 len:0x10000 key:0x181900 00:12:58.785 [2024-12-05 13:43:58.436020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fd2c3000 sqhd:7210 p:0 m:0 dnr:0 00:12:58.785 [2024-12-05 13:43:58.436028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:40192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000084b100 len:0x10000 key:0x182a00 00:12:58.785 [2024-12-05 13:43:58.436035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fd2c3000 sqhd:7210 p:0 m:0 dnr:0 
00:12:58.785 [2024-12-05 13:43:58.436043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:40320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000083b080 len:0x10000 key:0x182a00 00:12:58.785 [2024-12-05 13:43:58.436049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fd2c3000 sqhd:7210 p:0 m:0 dnr:0 
[... 47 further matching nvme_qpair WRITE / "ABORTED - SQ DELETION" notice pairs, lba 40448 through 46336 in steps of 128, trimmed ...]
00:12:58.786 13:43:58 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1591589 00:12:58.786 [2024-12-05 13:43:58.439334] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:12:58.787 13:43:58 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:12:58.787 13:43:58 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:12:58.787 13:43:58 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:12:58.787 13:43:58 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:12:58.787 13:43:58 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:12:58.787 13:43:58 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:12:58.787 13:43:58 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:12:58.787 { 00:12:58.787 "params": { 00:12:58.787 "name": "Nvme$subsystem", 00:12:58.787 "trtype": "$TEST_TRANSPORT", 00:12:58.787 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:58.787 "adrfam": "ipv4", 00:12:58.787 "trsvcid": "$NVMF_PORT", 00:12:58.787 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:58.787 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:58.787 "hdgst": ${hdgst:-false}, 00:12:58.787 "ddgst": ${ddgst:-false} 00:12:58.787 }, 00:12:58.787 "method": "bdev_nvme_attach_controller" 00:12:58.787 } 00:12:58.787 EOF 00:12:58.787 )") 00:12:58.787 13:43:58 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:12:58.787 13:43:58 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
00:12:58.787 13:43:58 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:12:58.787 13:43:58 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:12:58.787 "params": { 00:12:58.787 "name": "Nvme0", 00:12:58.787 "trtype": "rdma", 00:12:58.787 "traddr": "192.168.100.8", 00:12:58.787 "adrfam": "ipv4", 00:12:58.787 "trsvcid": "4420", 00:12:58.787 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:12:58.787 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:12:58.787 "hdgst": false, 00:12:58.787 "ddgst": false 00:12:58.787 }, 00:12:58.787 "method": "bdev_nvme_attach_controller" 00:12:58.787 }' 00:12:58.787 [2024-12-05 13:43:58.489992] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 00:12:58.787 [2024-12-05 13:43:58.490031] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1591864 ] 00:12:58.787 [2024-12-05 13:43:58.563961] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:58.787 [2024-12-05 13:43:58.585238] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:59.046 Running I/O for 1 seconds... 00:12:59.982 3304.00 IOPS, 206.50 MiB/s 00:12:59.982 Latency(us) 00:12:59.982 [2024-12-05T12:43:59.835Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:59.982 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:12:59.982 Verification LBA range: start 0x0 length 0x400 00:12:59.982 Nvme0n1 : 1.01 3319.81 207.49 0.00 0.00 18897.21 606.81 31651.46 00:12:59.982 [2024-12-05T12:43:59.835Z] =================================================================================================================== 00:12:59.982 [2024-12-05T12:43:59.835Z] Total : 3319.81 207.49 0.00 0.00 18897.21 606.81 31651.46 00:13:00.241 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 68: 1591589 Killed $rootdir/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "0") -q 64 -o 65536 -w verify -t 10 "${NO_HUGE[@]}" 00:13:00.241 13:43:59 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:13:00.241 13:43:59 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:13:00.241 13:43:59 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:13:00.241 13:43:59 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:13:00.241 13:43:59 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:13:00.241 13:43:59 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:00.241 13:43:59 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:13:00.241 13:43:59 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:13:00.241 13:43:59 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:13:00.241 13:43:59 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 
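The run above drives bdevperf from a JSON config assembled on the fly and passed over an anonymous file descriptor. Below is a minimal standalone sketch of the same step, assuming a local SPDK build under $SPDK_DIR and reusing the target address, NQNs, and workload knobs printed in the trace; the "subsystems"/"bdev"/"config" wrapper follows SPDK's JSON config schema and is filled in here as an assumption, since the trace only shows the per-controller params.

SPDK_DIR=${SPDK_DIR:-/path/to/spdk}   # assumption: path to a built SPDK tree
cfg=$(mktemp)
cat > "$cfg" <<EOF
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "rdma",
            "traddr": "192.168.100.8",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# same workload as the trace: queue depth 64, 64 KiB verify I/O, 1 second
"$SPDK_DIR/build/examples/bdevperf" --json "$cfg" -q 64 -o 65536 -w verify -t 1
rm -f "$cfg"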
00:13:00.241 13:43:59 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:00.241 13:43:59 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:13:00.241 rmmod nvme_rdma 00:13:00.241 rmmod nvme_fabrics 00:13:00.241 13:44:00 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:00.241 13:44:00 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:13:00.241 13:44:00 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:13:00.241 13:44:00 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 1591292 ']' 00:13:00.241 13:44:00 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 1591292 00:13:00.241 13:44:00 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 1591292 ']' 00:13:00.241 13:44:00 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 1591292 00:13:00.241 13:44:00 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:13:00.241 13:44:00 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:00.241 13:44:00 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1591292 00:13:00.241 13:44:00 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:00.241 13:44:00 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:00.241 13:44:00 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1591292' 00:13:00.241 killing process with pid 1591292 00:13:00.241 13:44:00 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 1591292 00:13:00.241 13:44:00 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 1591292 00:13:00.499 [2024-12-05 13:44:00.287731] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:13:00.499 13:44:00 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:00.499 13:44:00 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:13:00.499 13:44:00 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:13:00.499 00:13:00.499 real 0m10.172s 00:13:00.499 user 0m19.083s 00:13:00.499 sys 0m5.520s 00:13:00.499 13:44:00 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:00.500 13:44:00 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:00.500 ************************************ 00:13:00.500 END TEST nvmf_host_management 00:13:00.500 ************************************ 00:13:00.500 13:44:00 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=rdma 00:13:00.500 13:44:00 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:00.500 13:44:00 nvmf_rdma.nvmf_target_core -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:13:00.500 13:44:00 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:13:00.759 ************************************ 00:13:00.759 START TEST nvmf_lvol 00:13:00.759 ************************************ 00:13:00.759 13:44:00 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=rdma 00:13:00.759 * Looking for test storage... 00:13:00.759 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:13:00.759 13:44:00 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:00.759 13:44:00 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:13:00.759 13:44:00 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:00.759 13:44:00 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:00.759 13:44:00 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:00.759 13:44:00 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:00.759 13:44:00 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:00.759 13:44:00 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:13:00.759 13:44:00 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:13:00.759 13:44:00 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:13:00.759 13:44:00 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:13:00.759 13:44:00 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:13:00.759 13:44:00 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:13:00.759 13:44:00 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:13:00.759 13:44:00 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:00.759 13:44:00 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:13:00.759 13:44:00 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:13:00.759 13:44:00 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:00.759 13:44:00 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:00.759 13:44:00 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:13:00.759 13:44:00 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:13:00.759 13:44:00 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:00.759 13:44:00 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:13:00.759 13:44:00 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:13:00.759 13:44:00 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:13:00.759 13:44:00 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:13:00.759 13:44:00 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:00.759 13:44:00 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:13:00.759 13:44:00 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:13:00.759 13:44:00 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:00.759 13:44:00 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:00.759 13:44:00 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:13:00.759 13:44:00 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:00.759 13:44:00 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:00.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:00.759 --rc genhtml_branch_coverage=1 00:13:00.759 --rc genhtml_function_coverage=1 00:13:00.759 --rc genhtml_legend=1 00:13:00.759 --rc geninfo_all_blocks=1 00:13:00.759 --rc geninfo_unexecuted_blocks=1 00:13:00.759 00:13:00.759 ' 00:13:00.759 13:44:00 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:00.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:00.759 --rc genhtml_branch_coverage=1 00:13:00.760 --rc genhtml_function_coverage=1 00:13:00.760 --rc genhtml_legend=1 00:13:00.760 --rc geninfo_all_blocks=1 00:13:00.760 --rc geninfo_unexecuted_blocks=1 00:13:00.760 00:13:00.760 ' 00:13:00.760 13:44:00 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:00.760 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:00.760 --rc genhtml_branch_coverage=1 00:13:00.760 --rc genhtml_function_coverage=1 00:13:00.760 --rc genhtml_legend=1 00:13:00.760 --rc geninfo_all_blocks=1 00:13:00.760 --rc geninfo_unexecuted_blocks=1 00:13:00.760 00:13:00.760 ' 00:13:00.760 13:44:00 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:00.760 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:00.760 --rc genhtml_branch_coverage=1 00:13:00.760 --rc genhtml_function_coverage=1 00:13:00.760 --rc genhtml_legend=1 00:13:00.760 --rc geninfo_all_blocks=1 00:13:00.760 --rc geninfo_unexecuted_blocks=1 00:13:00.760 00:13:00.760 ' 00:13:00.760 13:44:00 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:13:00.760 13:44:00 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:13:00.760 13:44:00 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:13:00.760 13:44:00 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:00.760 13:44:00 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:00.760 13:44:00 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:00.760 13:44:00 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:00.760 13:44:00 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:00.760 13:44:00 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:00.760 13:44:00 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:00.760 13:44:00 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:00.760 13:44:00 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:00.760 13:44:00 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:13:00.760 13:44:00 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:13:00.760 13:44:00 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:00.760 13:44:00 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:00.760 13:44:00 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:00.760 13:44:00 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:00.760 13:44:00 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:13:00.760 13:44:00 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:13:00.760 13:44:00 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:00.760 13:44:00 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:00.760 13:44:00 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:00.760 13:44:00 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.760 13:44:00 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.760 13:44:00 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.760 13:44:00 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:13:00.760 13:44:00 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.760 13:44:00 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:13:00.760 13:44:00 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:00.760 13:44:00 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:00.760 13:44:00 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:00.760 13:44:00 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:00.760 13:44:00 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:00.760 13:44:00 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:00.760 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:00.760 13:44:00 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:00.760 13:44:00 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:00.760 13:44:00 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:00.760 13:44:00 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:00.760 13:44:00 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:00.760 13:44:00 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # 
LVOL_BDEV_INIT_SIZE=20 00:13:00.760 13:44:00 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:13:00.760 13:44:00 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:13:00.760 13:44:00 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:13:00.760 13:44:00 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:13:00.760 13:44:00 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:00.760 13:44:00 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:00.760 13:44:00 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:00.760 13:44:00 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:00.760 13:44:00 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:00.760 13:44:00 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:00.760 13:44:00 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:00.760 13:44:00 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:00.760 13:44:00 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:00.760 13:44:00 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:13:00.760 13:44:00 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:07.380 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:07.380 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:13:07.380 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:07.380 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:07.380 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:07.380 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:07.380 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:07.380 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:13:07.380 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:07.380 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:13:07.380 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:13:07.380 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:13:07.380 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:13:07.380 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:13:07.380 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:13:07.380 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:07.380 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:07.380 13:44:06 
nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:07.380 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:07.380 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:07.380 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:07.380 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:07.380 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:07.380 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:07.380 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:07.380 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:07.380 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:07.380 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:07.380 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:13:07.380 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:13:07.380 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:13:07.380 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:13:07.380 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:13:07.380 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:07.380 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:07.380 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:13:07.380 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:13:07.380 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:13:07.380 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:13:07.380 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:07.380 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:07.380 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:13:07.380 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:13:07.380 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:07.380 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:13:07.380 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:13:07.380 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:13:07.380 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:13:07.380 13:44:06 
nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:07.380 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:07.380 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:13:07.380 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:13:07.380 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:07.380 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:13:07.380 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:07.380 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:07.380 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:13:07.380 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:07.380 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:07.380 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:13:07.380 Found net devices under 0000:18:00.0: mlx_0_0 00:13:07.380 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:07.380 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:07.380 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:07.380 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:13:07.380 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:07.380 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:07.380 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:13:07.380 Found net devices under 0000:18:00.1: mlx_0_1 00:13:07.380 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:07.380 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:07.380 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:13:07.380 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:07.380 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:13:07.380 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:13:07.380 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@448 -- # rdma_device_init 00:13:07.380 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:13:07.380 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@62 -- # uname 00:13:07.380 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:13:07.380 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@66 -- # modprobe ib_cm 00:13:07.380 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@67 -- # modprobe ib_core 
00:13:07.380 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@68 -- # modprobe ib_umad 00:13:07.380 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:13:07.380 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@70 -- # modprobe iw_cm 00:13:07.380 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:13:07.380 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:13:07.380 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@530 -- # allocate_nic_ips 00:13:07.380 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:13:07.380 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@77 -- # get_rdma_if_list 00:13:07.380 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:07.380 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:13:07.380 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:13:07.380 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:07.380 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:13:07.380 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:13:07.380 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:07.380 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:07.380 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@108 -- # echo mlx_0_0 00:13:07.380 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@109 -- # continue 2 00:13:07.380 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:13:07.380 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:07.380 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:07.380 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:07.380 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:07.381 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@108 -- # echo mlx_0_1 00:13:07.381 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@109 -- # continue 2 00:13:07.381 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:13:07.381 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:13:07.381 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:13:07.381 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:13:07.381 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # awk '{print $4}' 00:13:07.381 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # cut -d/ -f1 00:13:07.381 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:13:07.381 
13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:13:07.381 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:13:07.381 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:07.381 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:13:07.381 altname enp24s0f0np0 00:13:07.381 altname ens785f0np0 00:13:07.381 inet 192.168.100.8/24 scope global mlx_0_0 00:13:07.381 valid_lft forever preferred_lft forever 00:13:07.381 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:13:07.381 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:13:07.381 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:13:07.381 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:13:07.381 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # awk '{print $4}' 00:13:07.381 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # cut -d/ -f1 00:13:07.381 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:13:07.381 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:13:07.381 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:13:07.381 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:07.381 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:13:07.381 altname enp24s0f1np1 00:13:07.381 altname ens785f1np1 00:13:07.381 inet 192.168.100.9/24 scope global mlx_0_1 00:13:07.381 valid_lft forever preferred_lft forever 00:13:07.381 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:13:07.381 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:07.381 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:13:07.381 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:13:07.381 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:13:07.381 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@90 -- # get_rdma_if_list 00:13:07.381 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:07.381 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:13:07.381 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:13:07.381 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:07.381 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:13:07.381 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:13:07.381 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:07.381 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:07.381 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@108 -- # echo mlx_0_0 00:13:07.381 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@109 -- # continue 2 00:13:07.381 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:13:07.381 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:07.381 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:07.381 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:07.381 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:07.381 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@108 -- # echo mlx_0_1 00:13:07.381 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@109 -- # continue 2 00:13:07.381 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:13:07.381 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:13:07.381 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:13:07.381 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:13:07.381 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # awk '{print $4}' 00:13:07.381 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # cut -d/ -f1 00:13:07.381 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:13:07.381 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:13:07.381 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:13:07.381 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:13:07.381 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # awk '{print $4}' 00:13:07.381 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # cut -d/ -f1 00:13:07.381 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:13:07.381 192.168.100.9' 00:13:07.381 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:13:07.381 192.168.100.9' 00:13:07.381 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@485 -- # head -n 1 00:13:07.381 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:13:07.381 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:13:07.381 192.168.100.9' 00:13:07.381 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@486 -- # tail -n +2 00:13:07.381 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@486 -- # head -n 1 00:13:07.381 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:13:07.381 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:13:07.381 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:13:07.381 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:13:07.381 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:13:07.381 
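For reference, the address probing traced above reduces to a small helper: this sketch mirrors common.sh's get_ip_address using the same ip/awk/cut pipeline, with the mlx_0_* names taken from this machine's device enumeration.

get_ip_address() {
    local interface=$1
    # first IPv4 address on the interface, prefix length stripped
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}
NVMF_FIRST_TARGET_IP=$(get_ip_address mlx_0_0)    # 192.168.100.8 in this run
NVMF_SECOND_TARGET_IP=$(get_ip_address mlx_0_1)   # 192.168.100.9 in this run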
13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:13:07.381 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:13:07.381 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:07.381 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:07.381 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:07.381 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=1595400 00:13:07.381 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:13:07.381 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 1595400 00:13:07.381 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 1595400 ']' 00:13:07.381 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:07.381 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:07.381 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:07.381 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:07.381 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:07.381 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:07.381 [2024-12-05 13:44:06.756175] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 00:13:07.381 [2024-12-05 13:44:06.756226] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:07.381 [2024-12-05 13:44:06.830438] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:07.381 [2024-12-05 13:44:06.852820] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:07.381 [2024-12-05 13:44:06.852857] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:07.381 [2024-12-05 13:44:06.852863] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:07.381 [2024-12-05 13:44:06.852872] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:07.381 [2024-12-05 13:44:06.852877] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
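nvmfappstart and waitforlisten, as traced here, amount to launching nvmf_tgt with the flags shown and polling its RPC socket until it answers. A rough standalone equivalent follows; the spdk_get_version probe is an assumption standing in for the harness's waitforlisten helper.

SPDK_DIR=${SPDK_DIR:-/path/to/spdk}   # assumption: path to a built SPDK tree
"$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x7 &
nvmfpid=$!
# poll the default RPC socket until the target responds (or the process dies)
for _ in $(seq 1 100); do
    "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1 && break
    kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
    sleep 0.1
done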
00:13:07.381 [2024-12-05 13:44:06.854093] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:07.381 [2024-12-05 13:44:06.854202] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:07.381 [2024-12-05 13:44:06.854204] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:07.381 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:07.381 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:13:07.381 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:07.381 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:07.381 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:07.381 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:07.382 13:44:06 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:13:07.382 [2024-12-05 13:44:07.159995] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x219e3c0/0x21a28b0) succeed. 00:13:07.382 [2024-12-05 13:44:07.168057] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x219f9b0/0x21e3f50) succeed. 00:13:07.640 13:44:07 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:07.640 13:44:07 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:13:07.640 13:44:07 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:07.899 13:44:07 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:13:07.899 13:44:07 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:13:08.158 13:44:07 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:13:08.416 13:44:08 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=cd830770-6873-4f3c-a748-1b4ee48bfcbf 00:13:08.416 13:44:08 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u cd830770-6873-4f3c-a748-1b4ee48bfcbf lvol 20 00:13:08.416 13:44:08 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=d5520fc9-1ec0-4c52-bd99-c1516d63cb19 00:13:08.416 13:44:08 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:08.675 13:44:08 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 d5520fc9-1ec0-4c52-bd99-c1516d63cb19 00:13:08.934 13:44:08 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:13:08.934 [2024-12-05 13:44:08.760314] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:13:08.934 13:44:08 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:13:09.193 13:44:08 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1595949 00:13:09.193 13:44:08 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:13:09.193 13:44:08 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:13:10.129 13:44:09 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot d5520fc9-1ec0-4c52-bd99-c1516d63cb19 MY_SNAPSHOT 00:13:10.388 13:44:10 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=f423dc2e-f83d-4407-bee4-724df68f0e38 00:13:10.388 13:44:10 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize d5520fc9-1ec0-4c52-bd99-c1516d63cb19 30 00:13:10.647 13:44:10 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone f423dc2e-f83d-4407-bee4-724df68f0e38 MY_CLONE 00:13:10.907 13:44:10 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=44fa639c-b240-4c65-922a-39394022fe0b 00:13:10.907 13:44:10 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 44fa639c-b240-4c65-922a-39394022fe0b 00:13:10.907 13:44:10 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1595949 00:13:20.878 Initializing NVMe Controllers 00:13:20.878 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:13:20.878 Controller IO queue size 128, less than required. 00:13:20.878 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:20.878 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:13:20.878 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:13:20.878 Initialization complete. Launching workers. 
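Condensing the RPC sequence traced above: the test builds a raid0 over two malloc bdevs, carves an lvstore and a 20 MiB lvol out of it, exports the lvol over NVMe/RDMA, and then exercises snapshot/resize/clone/inflate while spdk_nvme_perf runs. The same flow as a plain script, a sketch assuming $SPDK points at the build tree above (UUIDs are captured from each call rather than hard-coded as in the log):

    rpc="$SPDK/scripts/rpc.py"
    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    $rpc bdev_malloc_create 64 512          # -> Malloc0
    $rpc bdev_malloc_create 64 512          # -> Malloc1
    $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
    lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)
    lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420

    # Mutate the lvol while perf I/O is in flight:
    snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
    $rpc bdev_lvol_resize "$lvol" 30        # grow the live lvol to 30 MiB
    clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
    $rpc bdev_lvol_inflate "$clone"         # detach the clone from its snapshot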
00:13:20.878 ======================================================== 00:13:20.878 Latency(us) 00:13:20.878 Device Information : IOPS MiB/s Average min max 00:13:20.878 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 17664.70 69.00 7248.29 2107.77 44489.03 00:13:20.878 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 17600.00 68.75 7274.28 2861.31 37015.03 00:13:20.878 ======================================================== 00:13:20.878 Total : 35264.70 137.75 7261.26 2107.77 44489.03 00:13:20.878 00:13:20.878 13:44:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:20.878 13:44:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete d5520fc9-1ec0-4c52-bd99-c1516d63cb19 00:13:20.879 13:44:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u cd830770-6873-4f3c-a748-1b4ee48bfcbf 00:13:21.137 13:44:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:13:21.137 13:44:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:13:21.137 13:44:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:13:21.137 13:44:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:21.137 13:44:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:13:21.137 13:44:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:13:21.137 13:44:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:13:21.137 13:44:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:13:21.137 13:44:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:21.137 13:44:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:13:21.137 rmmod nvme_rdma 00:13:21.137 rmmod nvme_fabrics 00:13:21.137 13:44:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:21.137 13:44:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:13:21.137 13:44:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:13:21.137 13:44:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 1595400 ']' 00:13:21.137 13:44:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 1595400 00:13:21.137 13:44:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 1595400 ']' 00:13:21.137 13:44:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 1595400 00:13:21.137 13:44:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:13:21.137 13:44:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:21.137 13:44:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1595400 00:13:21.137 13:44:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:21.137 13:44:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:21.137 13:44:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1595400' 00:13:21.137 killing process with pid 1595400 00:13:21.137 13:44:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 1595400 00:13:21.137 13:44:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 1595400 00:13:21.396 13:44:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:21.396 13:44:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:13:21.396 00:13:21.396 real 0m20.853s 00:13:21.396 user 1m9.390s 00:13:21.396 sys 0m5.668s 00:13:21.396 13:44:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:21.396 13:44:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:21.396 ************************************ 00:13:21.396 END TEST nvmf_lvol 00:13:21.396 ************************************ 00:13:21.656 13:44:21 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=rdma 00:13:21.656 13:44:21 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:21.656 13:44:21 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:21.656 13:44:21 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:13:21.656 ************************************ 00:13:21.656 START TEST nvmf_lvs_grow 00:13:21.656 ************************************ 00:13:21.656 13:44:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=rdma 00:13:21.656 * Looking for test storage... 
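The shutdown path in the lines above tears state down in reverse order of creation and only then kills the target and unloads the kernel modules. As a sketch, carrying the pid and UUID variables over from the setup sketch earlier (the autotest killprocess helper also verifies the pid with kill -0 and waits, as the trace shows):

    $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
    $rpc bdev_lvol_delete "$lvol"
    $rpc bdev_lvol_delete_lvstore -u "$lvs"

    sync
    modprobe -v -r nvme-rdma       # also pulls out nvme_fabrics, per the rmmod lines above
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid"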
00:13:21.656 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:13:21.656 13:44:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:21.656 13:44:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:13:21.656 13:44:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:21.656 13:44:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:21.656 13:44:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:21.656 13:44:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:21.656 13:44:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:21.656 13:44:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:13:21.656 13:44:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:13:21.656 13:44:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:13:21.656 13:44:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:13:21.656 13:44:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:13:21.656 13:44:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:13:21.656 13:44:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:13:21.656 13:44:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:21.656 13:44:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:13:21.656 13:44:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:13:21.656 13:44:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:21.656 13:44:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:21.656 13:44:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:13:21.656 13:44:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:13:21.656 13:44:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:21.656 13:44:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:13:21.656 13:44:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:13:21.656 13:44:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:13:21.656 13:44:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:13:21.656 13:44:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:21.656 13:44:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:13:21.656 13:44:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:13:21.656 13:44:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:21.656 13:44:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:21.656 13:44:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:13:21.656 13:44:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:21.657 13:44:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:21.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:21.657 --rc genhtml_branch_coverage=1 00:13:21.657 --rc genhtml_function_coverage=1 00:13:21.657 --rc genhtml_legend=1 00:13:21.657 --rc geninfo_all_blocks=1 00:13:21.657 --rc geninfo_unexecuted_blocks=1 00:13:21.657 00:13:21.657 ' 00:13:21.657 13:44:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:21.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:21.657 --rc genhtml_branch_coverage=1 00:13:21.657 --rc genhtml_function_coverage=1 00:13:21.657 --rc genhtml_legend=1 00:13:21.657 --rc geninfo_all_blocks=1 00:13:21.657 --rc geninfo_unexecuted_blocks=1 00:13:21.657 00:13:21.657 ' 00:13:21.657 13:44:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:21.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:21.657 --rc genhtml_branch_coverage=1 00:13:21.657 --rc genhtml_function_coverage=1 00:13:21.657 --rc genhtml_legend=1 00:13:21.657 --rc geninfo_all_blocks=1 00:13:21.657 --rc geninfo_unexecuted_blocks=1 00:13:21.657 00:13:21.657 ' 00:13:21.657 13:44:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:21.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:21.657 --rc genhtml_branch_coverage=1 00:13:21.657 --rc genhtml_function_coverage=1 00:13:21.657 --rc genhtml_legend=1 00:13:21.657 --rc geninfo_all_blocks=1 00:13:21.657 --rc geninfo_unexecuted_blocks=1 00:13:21.657 00:13:21.657 ' 00:13:21.657 13:44:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:13:21.657 13:44:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 
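The scripts/common.sh fragments traced above are the stock dotted-version comparator: split both versions on ., -, or :, then compare component-wise up to the longer length. Reassembled from the traced lines into one function, simplified to numeric components only (the real helper also validates each component via decimal):

    # Sketch of cmp_versions/lt from scripts/common.sh: succeeds if $1 < $2.
    version_lt() {
        local IFS=.-:
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            local a=${ver1[v]:-0} b=${ver2[v]:-0}
            (( a > b )) && return 1
            (( a < b )) && return 0
        done
        return 1   # equal is not less-than
    }

    version_lt 1.15 2 && echo "lcov 1.15 predates 2"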
00:13:21.657 13:44:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:21.657 13:44:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:21.657 13:44:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:21.657 13:44:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:21.657 13:44:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:21.657 13:44:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:21.657 13:44:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:21.657 13:44:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:21.657 13:44:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:21.657 13:44:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:21.657 13:44:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:13:21.657 13:44:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:13:21.657 13:44:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:21.657 13:44:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:21.657 13:44:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:21.657 13:44:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:21.657 13:44:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:13:21.657 13:44:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:13:21.657 13:44:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:21.657 13:44:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:21.657 13:44:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:21.657 13:44:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:21.657 13:44:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:21.657 13:44:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:21.657 13:44:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:13:21.657 13:44:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:21.657 13:44:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:13:21.657 13:44:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:21.657 13:44:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:21.657 13:44:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:21.657 13:44:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:21.657 13:44:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:21.657 13:44:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:21.657 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:21.657 13:44:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:21.657 13:44:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:21.657 13:44:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:21.657 13:44:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:13:21.657 13:44:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:21.658 13:44:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:13:21.658 13:44:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:13:21.917 13:44:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:21.917 13:44:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:21.917 13:44:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:21.917 13:44:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:21.917 13:44:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:21.917 13:44:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:21.917 13:44:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:21.917 13:44:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:21.917 13:44:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:21.917 13:44:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:13:21.917 13:44:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:28.487 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:28.487 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:13:28.487 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:28.487 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:28.487 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:28.487 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:28.487 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:28.487 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:13:28.487 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:28.487 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:13:28.487 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:13:28.487 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:13:28.487 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:13:28.487 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:13:28.487 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:13:28.487 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:28.487 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:28.487 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:28.487 13:44:27 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:28.487 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:28.487 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:28.487 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:28.487 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:28.487 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:28.487 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:28.487 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:28.487 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:28.487 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:28.487 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:13:28.487 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:13:28.487 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:13:28.487 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:13:28.487 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:13:28.487 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:28.487 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:28.487 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:13:28.487 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:13:28.487 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:13:28.487 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:13:28.487 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:28.487 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:28.487 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:13:28.487 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:13:28.487 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:28.487 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:13:28.487 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:13:28.487 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:13:28.487 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:13:28.487 13:44:27 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:28.487 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:28.487 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:13:28.487 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:13:28.487 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:28.487 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:13:28.487 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:28.487 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:28.487 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:13:28.487 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:28.487 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:28.487 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:13:28.487 Found net devices under 0000:18:00.0: mlx_0_0 00:13:28.487 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:28.487 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:28.487 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:28.487 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:13:28.488 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:28.488 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:28.488 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:13:28.488 Found net devices under 0000:18:00.1: mlx_0_1 00:13:28.488 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:28.488 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:28.488 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:13:28.488 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:28.488 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:13:28.488 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:13:28.488 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@448 -- # rdma_device_init 00:13:28.488 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:13:28.488 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@62 -- # uname 00:13:28.488 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:13:28.488 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@66 -- # 
modprobe ib_cm 00:13:28.488 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@67 -- # modprobe ib_core 00:13:28.488 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@68 -- # modprobe ib_umad 00:13:28.488 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:13:28.488 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@70 -- # modprobe iw_cm 00:13:28.488 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:13:28.488 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:13:28.488 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@530 -- # allocate_nic_ips 00:13:28.488 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:13:28.488 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@77 -- # get_rdma_if_list 00:13:28.488 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:28.488 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:13:28.488 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:13:28.488 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:28.488 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:13:28.488 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:13:28.488 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:28.488 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:28.488 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@108 -- # echo mlx_0_0 00:13:28.488 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@109 -- # continue 2 00:13:28.488 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:13:28.488 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:28.488 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:28.488 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:28.488 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:28.488 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@108 -- # echo mlx_0_1 00:13:28.488 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@109 -- # continue 2 00:13:28.488 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:13:28.488 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:13:28.488 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:13:28.488 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:13:28.488 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@117 -- # awk '{print $4}' 00:13:28.488 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # cut -d/ -f1 00:13:28.488 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:13:28.488 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:13:28.488 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:13:28.488 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:28.488 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:13:28.488 altname enp24s0f0np0 00:13:28.488 altname ens785f0np0 00:13:28.488 inet 192.168.100.8/24 scope global mlx_0_0 00:13:28.488 valid_lft forever preferred_lft forever 00:13:28.488 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:13:28.488 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:13:28.488 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:13:28.488 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:13:28.488 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # awk '{print $4}' 00:13:28.488 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # cut -d/ -f1 00:13:28.488 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:13:28.488 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:13:28.488 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:13:28.488 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:28.488 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:13:28.488 altname enp24s0f1np1 00:13:28.488 altname ens785f1np1 00:13:28.488 inet 192.168.100.9/24 scope global mlx_0_1 00:13:28.488 valid_lft forever preferred_lft forever 00:13:28.488 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:13:28.488 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:28.488 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:13:28.488 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:13:28.488 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:13:28.488 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@90 -- # get_rdma_if_list 00:13:28.488 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:28.488 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:13:28.488 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:13:28.488 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:28.488 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:13:28.488 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:13:28.488 13:44:27 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:28.488 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:28.488 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@108 -- # echo mlx_0_0 00:13:28.488 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@109 -- # continue 2 00:13:28.488 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:13:28.488 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:28.488 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:28.488 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:28.488 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:28.488 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@108 -- # echo mlx_0_1 00:13:28.488 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@109 -- # continue 2 00:13:28.488 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:13:28.488 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:13:28.488 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:13:28.488 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:13:28.488 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # awk '{print $4}' 00:13:28.488 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # cut -d/ -f1 00:13:28.488 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:13:28.488 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:13:28.488 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:13:28.488 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:13:28.488 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # awk '{print $4}' 00:13:28.488 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # cut -d/ -f1 00:13:28.488 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:13:28.488 192.168.100.9' 00:13:28.488 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:13:28.488 192.168.100.9' 00:13:28.488 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@485 -- # head -n 1 00:13:28.488 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:13:28.488 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:13:28.488 192.168.100.9' 00:13:28.488 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@486 -- # tail -n +2 00:13:28.488 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@486 -- # head -n 1 00:13:28.488 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@486 -- # 
NVMF_SECOND_TARGET_IP=192.168.100.9 00:13:28.488 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:13:28.488 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:13:28.488 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:13:28.489 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:13:28.489 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:13:28.489 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:13:28.489 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:28.489 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:28.489 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:28.489 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=1601537 00:13:28.489 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 1601537 00:13:28.489 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:13:28.489 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 1601537 ']' 00:13:28.489 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:28.489 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:28.489 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:28.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:28.489 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:28.489 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:28.489 [2024-12-05 13:44:27.694117] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 00:13:28.489 [2024-12-05 13:44:27.694169] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:28.489 [2024-12-05 13:44:27.768916] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:28.489 [2024-12-05 13:44:27.790240] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:28.489 [2024-12-05 13:44:27.790277] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:28.489 [2024-12-05 13:44:27.790284] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:28.489 [2024-12-05 13:44:27.790289] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:28.489 [2024-12-05 13:44:27.790294] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
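This second nvmf_tgt instance runs with -m 0x1, so only one reactor comes up, on core 0 (the earlier nvmf_lvol run used 0x7, i.e. cores 0 through 2). The mask is a plain bitmap of CPU cores; a throwaway decoder, purely illustrative:

    mask=0x7   # core mask handed to nvmf_tgt; 0x1 would select core 0 only
    for (( core = 0; core < 32; core++ )); do
        (( (mask >> core) & 1 )) && echo "reactor on core $core"
    done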
00:13:28.489 [2024-12-05 13:44:27.790765] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:28.489 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:28.489 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:13:28.489 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:28.489 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:28.489 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:28.489 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:28.489 13:44:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:13:28.489 [2024-12-05 13:44:28.090071] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x5ebd00/0x5f01f0) succeed. 00:13:28.489 [2024-12-05 13:44:28.097719] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x5ed1b0/0x631890) succeed. 00:13:28.489 13:44:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:13:28.489 13:44:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:28.489 13:44:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:28.489 13:44:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:28.489 ************************************ 00:13:28.489 START TEST lvs_grow_clean 00:13:28.489 ************************************ 00:13:28.489 13:44:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:13:28.489 13:44:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:13:28.489 13:44:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:13:28.489 13:44:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:13:28.489 13:44:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:13:28.489 13:44:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:13:28.489 13:44:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:13:28.489 13:44:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:28.489 13:44:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:28.489 13:44:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:28.748 13:44:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:13:28.748 13:44:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:13:28.748 13:44:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=67d8005f-4322-43f7-a62c-6015558f865d 00:13:28.748 13:44:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 67d8005f-4322-43f7-a62c-6015558f865d 00:13:28.748 13:44:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:13:29.006 13:44:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:13:29.006 13:44:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:13:29.006 13:44:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 67d8005f-4322-43f7-a62c-6015558f865d lvol 150 00:13:29.264 13:44:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=2e815413-46b0-406c-a366-3d17cf4904e5 00:13:29.264 13:44:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:29.264 13:44:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:13:29.264 [2024-12-05 13:44:29.078135] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:13:29.264 [2024-12-05 13:44:29.078185] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:13:29.264 true 00:13:29.264 13:44:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 67d8005f-4322-43f7-a62c-6015558f865d 00:13:29.264 13:44:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:13:29.523 13:44:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:13:29.523 13:44:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:29.782 13:44:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 2e815413-46b0-406c-a366-3d17cf4904e5 00:13:29.782 13:44:29 
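The sequence above is the heart of lvs_grow_clean: a 200 MiB file-backed AIO bdev holds the lvstore (49 usable 4 MiB clusters), the file is then truncated to 400 MiB and bdev_aio_rescan picks up the new block count (51200 -> 102400), but total_data_clusters stays at 49 until the lvstore itself is grown later. As a sketch, with the backing-file path shortened from the jenkins workspace path in the log:

    aio_file=/tmp/aio_bdev               # log uses .../spdk/test/nvmf/target/aio_bdev
    rm -f "$aio_file"
    truncate -s 200M "$aio_file"
    $rpc bdev_aio_create "$aio_file" aio_bdev 4096
    lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 \
          --md-pages-per-cluster-ratio 300 aio_bdev lvs)
    $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 49
    lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 150)

    truncate -s 400M "$aio_file"         # grow the backing file...
    $rpc bdev_aio_rescan aio_bdev        # ...and let the bdev see 102400 blocks
    $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # still 49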
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:13:30.041 [2024-12-05 13:44:29.740275] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:13:30.041 13:44:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:13:30.300 13:44:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:13:30.300 13:44:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1601898 00:13:30.300 13:44:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:30.300 13:44:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1601898 /var/tmp/bdevperf.sock 00:13:30.300 13:44:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 1601898 ']' 00:13:30.300 13:44:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:30.300 13:44:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:30.300 13:44:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:30.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:30.300 13:44:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:30.300 13:44:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:13:30.300 [2024-12-05 13:44:29.945033] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 
00:13:30.300 [2024-12-05 13:44:29.945075] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1601898 ] 00:13:30.300 [2024-12-05 13:44:30.015720] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:30.300 [2024-12-05 13:44:30.037302] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:30.300 13:44:30 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:30.300 13:44:30 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:13:30.300 13:44:30 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:13:30.559 Nvme0n1 00:13:30.559 13:44:30 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:13:30.818 [ 00:13:30.818 { 00:13:30.818 "name": "Nvme0n1", 00:13:30.818 "aliases": [ 00:13:30.818 "2e815413-46b0-406c-a366-3d17cf4904e5" 00:13:30.818 ], 00:13:30.818 "product_name": "NVMe disk", 00:13:30.818 "block_size": 4096, 00:13:30.818 "num_blocks": 38912, 00:13:30.818 "uuid": "2e815413-46b0-406c-a366-3d17cf4904e5", 00:13:30.818 "numa_id": 0, 00:13:30.818 "assigned_rate_limits": { 00:13:30.818 "rw_ios_per_sec": 0, 00:13:30.818 "rw_mbytes_per_sec": 0, 00:13:30.818 "r_mbytes_per_sec": 0, 00:13:30.818 "w_mbytes_per_sec": 0 00:13:30.818 }, 00:13:30.818 "claimed": false, 00:13:30.818 "zoned": false, 00:13:30.818 "supported_io_types": { 00:13:30.818 "read": true, 00:13:30.818 "write": true, 00:13:30.818 "unmap": true, 00:13:30.818 "flush": true, 00:13:30.818 "reset": true, 00:13:30.818 "nvme_admin": true, 00:13:30.818 "nvme_io": true, 00:13:30.818 "nvme_io_md": false, 00:13:30.818 "write_zeroes": true, 00:13:30.818 "zcopy": false, 00:13:30.818 "get_zone_info": false, 00:13:30.818 "zone_management": false, 00:13:30.818 "zone_append": false, 00:13:30.818 "compare": true, 00:13:30.818 "compare_and_write": true, 00:13:30.818 "abort": true, 00:13:30.818 "seek_hole": false, 00:13:30.818 "seek_data": false, 00:13:30.818 "copy": true, 00:13:30.818 "nvme_iov_md": false 00:13:30.818 }, 00:13:30.818 "memory_domains": [ 00:13:30.818 { 00:13:30.818 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:13:30.818 "dma_device_type": 0 00:13:30.818 } 00:13:30.818 ], 00:13:30.818 "driver_specific": { 00:13:30.818 "nvme": [ 00:13:30.818 { 00:13:30.818 "trid": { 00:13:30.818 "trtype": "RDMA", 00:13:30.818 "adrfam": "IPv4", 00:13:30.818 "traddr": "192.168.100.8", 00:13:30.818 "trsvcid": "4420", 00:13:30.818 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:13:30.818 }, 00:13:30.818 "ctrlr_data": { 00:13:30.818 "cntlid": 1, 00:13:30.819 "vendor_id": "0x8086", 00:13:30.819 "model_number": "SPDK bdev Controller", 00:13:30.819 "serial_number": "SPDK0", 00:13:30.819 "firmware_revision": "25.01", 00:13:30.819 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:30.819 "oacs": { 00:13:30.819 "security": 0, 00:13:30.819 "format": 0, 00:13:30.819 "firmware": 0, 00:13:30.819 "ns_manage": 0 00:13:30.819 }, 00:13:30.819 "multi_ctrlr": true, 
00:13:30.819 "ana_reporting": false 00:13:30.819 }, 00:13:30.819 "vs": { 00:13:30.819 "nvme_version": "1.3" 00:13:30.819 }, 00:13:30.819 "ns_data": { 00:13:30.819 "id": 1, 00:13:30.819 "can_share": true 00:13:30.819 } 00:13:30.819 } 00:13:30.819 ], 00:13:30.819 "mp_policy": "active_passive" 00:13:30.819 } 00:13:30.819 } 00:13:30.819 ] 00:13:30.819 13:44:30 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:30.819 13:44:30 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1602151 00:13:30.819 13:44:30 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:13:30.819 Running I/O for 10 seconds... 00:13:32.196 Latency(us) 00:13:32.196 [2024-12-05T12:44:32.049Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:32.196 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:32.196 Nvme0n1 : 1.00 37122.00 145.01 0.00 0.00 0.00 0.00 0.00 00:13:32.196 [2024-12-05T12:44:32.049Z] =================================================================================================================== 00:13:32.196 [2024-12-05T12:44:32.049Z] Total : 37122.00 145.01 0.00 0.00 0.00 0.00 0.00 00:13:32.196 00:13:32.765 13:44:32 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 67d8005f-4322-43f7-a62c-6015558f865d 00:13:33.024 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:33.024 Nvme0n1 : 2.00 37472.00 146.38 0.00 0.00 0.00 0.00 0.00 00:13:33.024 [2024-12-05T12:44:32.877Z] =================================================================================================================== 00:13:33.024 [2024-12-05T12:44:32.877Z] Total : 37472.00 146.38 0.00 0.00 0.00 0.00 0.00 00:13:33.024 00:13:33.024 true 00:13:33.024 13:44:32 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 67d8005f-4322-43f7-a62c-6015558f865d 00:13:33.024 13:44:32 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:13:33.283 13:44:32 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:13:33.283 13:44:32 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:13:33.283 13:44:32 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1602151 00:13:33.850 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:33.851 Nvme0n1 : 3.00 37525.33 146.58 0.00 0.00 0.00 0.00 0.00 00:13:33.851 [2024-12-05T12:44:33.704Z] =================================================================================================================== 00:13:33.851 [2024-12-05T12:44:33.704Z] Total : 37525.33 146.58 0.00 0.00 0.00 0.00 0.00 00:13:33.851 00:13:35.229 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:35.229 Nvme0n1 : 4.00 37624.25 146.97 0.00 0.00 0.00 0.00 0.00 00:13:35.229 [2024-12-05T12:44:35.083Z] 
=================================================================================================================== 00:13:35.230 [2024-12-05T12:44:35.083Z] Total : 37624.25 146.97 0.00 0.00 0.00 0.00 0.00 00:13:35.230 00:13:36.163 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:36.163 Nvme0n1 : 5.00 37702.00 147.27 0.00 0.00 0.00 0.00 0.00 00:13:36.163 [2024-12-05T12:44:36.016Z] =================================================================================================================== 00:13:36.163 [2024-12-05T12:44:36.016Z] Total : 37702.00 147.27 0.00 0.00 0.00 0.00 0.00 00:13:36.163 00:13:37.099 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:37.099 Nvme0n1 : 6.00 37754.50 147.48 0.00 0.00 0.00 0.00 0.00 00:13:37.099 [2024-12-05T12:44:36.952Z] =================================================================================================================== 00:13:37.099 [2024-12-05T12:44:36.952Z] Total : 37754.50 147.48 0.00 0.00 0.00 0.00 0.00 00:13:37.099 00:13:38.034 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:38.034 Nvme0n1 : 7.00 37796.14 147.64 0.00 0.00 0.00 0.00 0.00 00:13:38.034 [2024-12-05T12:44:37.887Z] =================================================================================================================== 00:13:38.034 [2024-12-05T12:44:37.887Z] Total : 37796.14 147.64 0.00 0.00 0.00 0.00 0.00 00:13:38.034 00:13:38.970 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:38.970 Nvme0n1 : 8.00 37828.12 147.77 0.00 0.00 0.00 0.00 0.00 00:13:38.970 [2024-12-05T12:44:38.823Z] =================================================================================================================== 00:13:38.970 [2024-12-05T12:44:38.823Z] Total : 37828.12 147.77 0.00 0.00 0.00 0.00 0.00 00:13:38.970 00:13:39.907 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:39.907 Nvme0n1 : 9.00 37852.33 147.86 0.00 0.00 0.00 0.00 0.00 00:13:39.907 [2024-12-05T12:44:39.760Z] =================================================================================================================== 00:13:39.907 [2024-12-05T12:44:39.760Z] Total : 37852.33 147.86 0.00 0.00 0.00 0.00 0.00 00:13:39.907 00:13:40.844 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:40.844 Nvme0n1 : 10.00 37804.60 147.67 0.00 0.00 0.00 0.00 0.00 00:13:40.844 [2024-12-05T12:44:40.697Z] =================================================================================================================== 00:13:40.844 [2024-12-05T12:44:40.697Z] Total : 37804.60 147.67 0.00 0.00 0.00 0.00 0.00 00:13:40.844 00:13:40.844 00:13:40.844 Latency(us) 00:13:40.844 [2024-12-05T12:44:40.697Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:40.844 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:40.844 Nvme0n1 : 10.00 37802.25 147.67 0.00 0.00 3383.14 2051.03 13981.01 00:13:40.844 [2024-12-05T12:44:40.697Z] =================================================================================================================== 00:13:40.844 [2024-12-05T12:44:40.697Z] Total : 37802.25 147.67 0.00 0.00 3383.14 2051.03 13981.01 00:13:40.844 { 00:13:40.844 "results": [ 00:13:40.844 { 00:13:40.844 "job": "Nvme0n1", 00:13:40.844 "core_mask": "0x2", 00:13:40.844 "workload": "randwrite", 00:13:40.844 "status": "finished", 00:13:40.844 "queue_depth": 128, 00:13:40.844 "io_size": 4096, 
00:13:40.844 "runtime": 10.003135, 00:13:40.844 "iops": 37802.248994940084, 00:13:40.844 "mibps": 147.6650351364847, 00:13:40.844 "io_failed": 0, 00:13:40.844 "io_timeout": 0, 00:13:40.844 "avg_latency_us": 3383.1427874474025, 00:13:40.844 "min_latency_us": 2051.034074074074, 00:13:40.844 "max_latency_us": 13981.013333333334 00:13:40.844 } 00:13:40.844 ], 00:13:40.844 "core_count": 1 00:13:40.844 } 00:13:41.103 13:44:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1601898 00:13:41.103 13:44:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 1601898 ']' 00:13:41.103 13:44:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 1601898 00:13:41.103 13:44:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:13:41.103 13:44:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:41.103 13:44:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1601898 00:13:41.103 13:44:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:41.103 13:44:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:41.103 13:44:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1601898' 00:13:41.103 killing process with pid 1601898 00:13:41.103 13:44:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 1601898 00:13:41.103 Received shutdown signal, test time was about 10.000000 seconds 00:13:41.103 00:13:41.103 Latency(us) 00:13:41.103 [2024-12-05T12:44:40.956Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:41.103 [2024-12-05T12:44:40.956Z] =================================================================================================================== 00:13:41.103 [2024-12-05T12:44:40.956Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:41.103 13:44:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 1601898 00:13:41.103 13:44:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:13:41.362 13:44:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:41.621 13:44:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 67d8005f-4322-43f7-a62c-6015558f865d 00:13:41.621 13:44:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:13:41.621 13:44:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:13:41.621 13:44:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:13:41.621 13:44:41 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:13:41.881 [2024-12-05 13:44:41.603501] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:13:41.881 13:44:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 67d8005f-4322-43f7-a62c-6015558f865d 00:13:41.881 13:44:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:13:41.881 13:44:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 67d8005f-4322-43f7-a62c-6015558f865d 00:13:41.881 13:44:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:13:41.881 13:44:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:41.881 13:44:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:13:41.881 13:44:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:41.881 13:44:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:13:41.881 13:44:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:41.881 13:44:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:13:41.881 13:44:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:13:41.881 13:44:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 67d8005f-4322-43f7-a62c-6015558f865d 00:13:42.141 request: 00:13:42.141 { 00:13:42.141 "uuid": "67d8005f-4322-43f7-a62c-6015558f865d", 00:13:42.141 "method": "bdev_lvol_get_lvstores", 00:13:42.141 "req_id": 1 00:13:42.141 } 00:13:42.141 Got JSON-RPC error response 00:13:42.141 response: 00:13:42.141 { 00:13:42.141 "code": -19, 00:13:42.141 "message": "No such device" 00:13:42.141 } 00:13:42.141 13:44:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:13:42.141 13:44:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:42.141 13:44:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:42.141 13:44:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:42.141 13:44:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:42.141 aio_bdev 00:13:42.141 13:44:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 2e815413-46b0-406c-a366-3d17cf4904e5 00:13:42.141 13:44:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=2e815413-46b0-406c-a366-3d17cf4904e5 00:13:42.141 13:44:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:42.141 13:44:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:13:42.141 13:44:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:42.141 13:44:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:42.141 13:44:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:13:42.401 13:44:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 2e815413-46b0-406c-a366-3d17cf4904e5 -t 2000 00:13:42.660 [ 00:13:42.660 { 00:13:42.660 "name": "2e815413-46b0-406c-a366-3d17cf4904e5", 00:13:42.660 "aliases": [ 00:13:42.660 "lvs/lvol" 00:13:42.660 ], 00:13:42.660 "product_name": "Logical Volume", 00:13:42.660 "block_size": 4096, 00:13:42.660 "num_blocks": 38912, 00:13:42.660 "uuid": "2e815413-46b0-406c-a366-3d17cf4904e5", 00:13:42.660 "assigned_rate_limits": { 00:13:42.660 "rw_ios_per_sec": 0, 00:13:42.660 "rw_mbytes_per_sec": 0, 00:13:42.660 "r_mbytes_per_sec": 0, 00:13:42.660 "w_mbytes_per_sec": 0 00:13:42.660 }, 00:13:42.660 "claimed": false, 00:13:42.660 "zoned": false, 00:13:42.660 "supported_io_types": { 00:13:42.660 "read": true, 00:13:42.660 "write": true, 00:13:42.660 "unmap": true, 00:13:42.660 "flush": false, 00:13:42.660 "reset": true, 00:13:42.660 "nvme_admin": false, 00:13:42.660 "nvme_io": false, 00:13:42.660 "nvme_io_md": false, 00:13:42.660 "write_zeroes": true, 00:13:42.660 "zcopy": false, 00:13:42.660 "get_zone_info": false, 00:13:42.660 "zone_management": false, 00:13:42.660 "zone_append": false, 00:13:42.660 "compare": false, 00:13:42.660 "compare_and_write": false, 00:13:42.660 "abort": false, 00:13:42.660 "seek_hole": true, 00:13:42.660 "seek_data": true, 00:13:42.660 "copy": false, 00:13:42.660 "nvme_iov_md": false 00:13:42.660 }, 00:13:42.660 "driver_specific": { 00:13:42.660 "lvol": { 00:13:42.660 "lvol_store_uuid": "67d8005f-4322-43f7-a62c-6015558f865d", 00:13:42.660 "base_bdev": "aio_bdev", 00:13:42.660 "thin_provision": false, 00:13:42.660 "num_allocated_clusters": 38, 00:13:42.660 "snapshot": false, 00:13:42.660 "clone": false, 00:13:42.660 "esnap_clone": false 00:13:42.660 } 00:13:42.660 } 00:13:42.660 } 00:13:42.660 ] 00:13:42.660 13:44:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:13:42.660 13:44:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:13:42.660 13:44:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 67d8005f-4322-43f7-a62c-6015558f865d 00:13:42.660 13:44:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:13:42.660 13:44:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 67d8005f-4322-43f7-a62c-6015558f865d 00:13:42.660 13:44:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:13:42.920 13:44:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:13:42.920 13:44:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 2e815413-46b0-406c-a366-3d17cf4904e5 00:13:43.180 13:44:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 67d8005f-4322-43f7-a62c-6015558f865d 00:13:43.439 13:44:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:13:43.439 13:44:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:43.439 00:13:43.439 real 0m15.061s 00:13:43.439 user 0m15.040s 00:13:43.439 sys 0m0.949s 00:13:43.439 13:44:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:43.439 13:44:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:13:43.439 ************************************ 00:13:43.439 END TEST lvs_grow_clean 00:13:43.439 ************************************ 00:13:43.439 13:44:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:13:43.439 13:44:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:43.439 13:44:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:43.439 13:44:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:43.698 ************************************ 00:13:43.698 START TEST lvs_grow_dirty 00:13:43.698 ************************************ 00:13:43.698 13:44:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:13:43.698 13:44:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:13:43.698 13:44:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:13:43.698 13:44:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:13:43.698 13:44:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:13:43.698 13:44:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:13:43.698 13:44:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:13:43.698 13:44:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:43.698 13:44:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:43.698 13:44:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:43.698 13:44:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:13:43.698 13:44:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:13:43.957 13:44:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=ab535d0f-f60f-46c2-a875-1f4f188ea360 00:13:43.958 13:44:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ab535d0f-f60f-46c2-a875-1f4f188ea360 00:13:43.958 13:44:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:13:44.216 13:44:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:13:44.217 13:44:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:13:44.217 13:44:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u ab535d0f-f60f-46c2-a875-1f4f188ea360 lvol 150 00:13:44.217 13:44:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=d96377e9-7b3a-4af8-adbf-5951f7da6d6c 00:13:44.217 13:44:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:44.217 13:44:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:13:44.476 [2024-12-05 13:44:44.201085] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:13:44.476 [2024-12-05 13:44:44.201132] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:13:44.476 true 00:13:44.476 13:44:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ab535d0f-f60f-46c2-a875-1f4f188ea360 00:13:44.476 13:44:44 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:13:44.735 13:44:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:13:44.735 13:44:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:44.735 13:44:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 d96377e9-7b3a-4af8-adbf-5951f7da6d6c 00:13:44.994 13:44:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:13:44.994 [2024-12-05 13:44:44.843127] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:13:45.255 13:44:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:13:45.255 13:44:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:13:45.255 13:44:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1604827 00:13:45.255 13:44:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:45.255 13:44:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1604827 /var/tmp/bdevperf.sock 00:13:45.255 13:44:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 1604827 ']' 00:13:45.255 13:44:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:45.255 13:44:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:45.255 13:44:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:45.255 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:45.255 13:44:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:45.255 13:44:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:45.255 [2024-12-05 13:44:45.065209] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 
00:13:45.255 [2024-12-05 13:44:45.065255] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1604827 ] 00:13:45.514 [2024-12-05 13:44:45.137261] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:45.514 [2024-12-05 13:44:45.158823] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:45.514 13:44:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:45.514 13:44:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:13:45.514 13:44:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:13:45.771 Nvme0n1 00:13:45.771 13:44:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:13:46.056 [ 00:13:46.056 { 00:13:46.056 "name": "Nvme0n1", 00:13:46.056 "aliases": [ 00:13:46.056 "d96377e9-7b3a-4af8-adbf-5951f7da6d6c" 00:13:46.056 ], 00:13:46.056 "product_name": "NVMe disk", 00:13:46.056 "block_size": 4096, 00:13:46.056 "num_blocks": 38912, 00:13:46.056 "uuid": "d96377e9-7b3a-4af8-adbf-5951f7da6d6c", 00:13:46.056 "numa_id": 0, 00:13:46.056 "assigned_rate_limits": { 00:13:46.056 "rw_ios_per_sec": 0, 00:13:46.056 "rw_mbytes_per_sec": 0, 00:13:46.056 "r_mbytes_per_sec": 0, 00:13:46.056 "w_mbytes_per_sec": 0 00:13:46.056 }, 00:13:46.056 "claimed": false, 00:13:46.056 "zoned": false, 00:13:46.056 "supported_io_types": { 00:13:46.056 "read": true, 00:13:46.056 "write": true, 00:13:46.056 "unmap": true, 00:13:46.056 "flush": true, 00:13:46.056 "reset": true, 00:13:46.056 "nvme_admin": true, 00:13:46.056 "nvme_io": true, 00:13:46.056 "nvme_io_md": false, 00:13:46.056 "write_zeroes": true, 00:13:46.056 "zcopy": false, 00:13:46.056 "get_zone_info": false, 00:13:46.056 "zone_management": false, 00:13:46.056 "zone_append": false, 00:13:46.056 "compare": true, 00:13:46.056 "compare_and_write": true, 00:13:46.056 "abort": true, 00:13:46.056 "seek_hole": false, 00:13:46.056 "seek_data": false, 00:13:46.056 "copy": true, 00:13:46.056 "nvme_iov_md": false 00:13:46.056 }, 00:13:46.056 "memory_domains": [ 00:13:46.056 { 00:13:46.056 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:13:46.056 "dma_device_type": 0 00:13:46.056 } 00:13:46.056 ], 00:13:46.056 "driver_specific": { 00:13:46.056 "nvme": [ 00:13:46.056 { 00:13:46.056 "trid": { 00:13:46.056 "trtype": "RDMA", 00:13:46.056 "adrfam": "IPv4", 00:13:46.056 "traddr": "192.168.100.8", 00:13:46.056 "trsvcid": "4420", 00:13:46.056 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:13:46.056 }, 00:13:46.056 "ctrlr_data": { 00:13:46.056 "cntlid": 1, 00:13:46.056 "vendor_id": "0x8086", 00:13:46.056 "model_number": "SPDK bdev Controller", 00:13:46.056 "serial_number": "SPDK0", 00:13:46.056 "firmware_revision": "25.01", 00:13:46.056 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:46.056 "oacs": { 00:13:46.056 "security": 0, 00:13:46.056 "format": 0, 00:13:46.056 "firmware": 0, 00:13:46.056 "ns_manage": 0 00:13:46.056 }, 00:13:46.056 "multi_ctrlr": true, 
00:13:46.056 "ana_reporting": false 00:13:46.056 }, 00:13:46.056 "vs": { 00:13:46.056 "nvme_version": "1.3" 00:13:46.056 }, 00:13:46.056 "ns_data": { 00:13:46.056 "id": 1, 00:13:46.056 "can_share": true 00:13:46.056 } 00:13:46.056 } 00:13:46.056 ], 00:13:46.056 "mp_policy": "active_passive" 00:13:46.056 } 00:13:46.056 } 00:13:46.056 ] 00:13:46.056 13:44:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1604860 00:13:46.056 13:44:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:13:46.056 13:44:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:46.056 Running I/O for 10 seconds... 00:13:46.991 Latency(us) 00:13:46.991 [2024-12-05T12:44:46.844Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:46.991 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:46.991 Nvme0n1 : 1.00 37059.00 144.76 0.00 0.00 0.00 0.00 0.00 00:13:46.991 [2024-12-05T12:44:46.844Z] =================================================================================================================== 00:13:46.991 [2024-12-05T12:44:46.844Z] Total : 37059.00 144.76 0.00 0.00 0.00 0.00 0.00 00:13:46.991 00:13:47.927 13:44:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u ab535d0f-f60f-46c2-a875-1f4f188ea360 00:13:47.927 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:47.927 Nvme0n1 : 2.00 37393.00 146.07 0.00 0.00 0.00 0.00 0.00 00:13:47.927 [2024-12-05T12:44:47.780Z] =================================================================================================================== 00:13:47.927 [2024-12-05T12:44:47.780Z] Total : 37393.00 146.07 0.00 0.00 0.00 0.00 0.00 00:13:47.927 00:13:48.185 true 00:13:48.185 13:44:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ab535d0f-f60f-46c2-a875-1f4f188ea360 00:13:48.185 13:44:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:13:48.457 13:44:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:13:48.457 13:44:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:13:48.457 13:44:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1604860 00:13:49.075 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:49.075 Nvme0n1 : 3.00 37461.67 146.33 0.00 0.00 0.00 0.00 0.00 00:13:49.075 [2024-12-05T12:44:48.928Z] =================================================================================================================== 00:13:49.075 [2024-12-05T12:44:48.928Z] Total : 37461.67 146.33 0.00 0.00 0.00 0.00 0.00 00:13:49.075 00:13:50.107 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:50.107 Nvme0n1 : 4.00 37560.25 146.72 0.00 0.00 0.00 0.00 0.00 00:13:50.107 [2024-12-05T12:44:49.960Z] 
=================================================================================================================== 00:13:50.107 [2024-12-05T12:44:49.960Z] Total : 37560.25 146.72 0.00 0.00 0.00 0.00 0.00 00:13:50.107 00:13:51.047 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:51.047 Nvme0n1 : 5.00 37536.40 146.63 0.00 0.00 0.00 0.00 0.00 00:13:51.047 [2024-12-05T12:44:50.900Z] =================================================================================================================== 00:13:51.047 [2024-12-05T12:44:50.900Z] Total : 37536.40 146.63 0.00 0.00 0.00 0.00 0.00 00:13:51.047 00:13:51.981 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:51.981 Nvme0n1 : 6.00 37626.50 146.98 0.00 0.00 0.00 0.00 0.00 00:13:51.981 [2024-12-05T12:44:51.834Z] =================================================================================================================== 00:13:51.981 [2024-12-05T12:44:51.834Z] Total : 37626.50 146.98 0.00 0.00 0.00 0.00 0.00 00:13:51.981 00:13:53.357 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:53.357 Nvme0n1 : 7.00 37677.14 147.18 0.00 0.00 0.00 0.00 0.00 00:13:53.357 [2024-12-05T12:44:53.210Z] =================================================================================================================== 00:13:53.357 [2024-12-05T12:44:53.210Z] Total : 37677.14 147.18 0.00 0.00 0.00 0.00 0.00 00:13:53.357 00:13:54.293 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:54.293 Nvme0n1 : 8.00 37723.62 147.36 0.00 0.00 0.00 0.00 0.00 00:13:54.293 [2024-12-05T12:44:54.146Z] =================================================================================================================== 00:13:54.293 [2024-12-05T12:44:54.146Z] Total : 37723.62 147.36 0.00 0.00 0.00 0.00 0.00 00:13:54.293 00:13:55.228 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:55.228 Nvme0n1 : 9.00 37770.22 147.54 0.00 0.00 0.00 0.00 0.00 00:13:55.228 [2024-12-05T12:44:55.081Z] =================================================================================================================== 00:13:55.228 [2024-12-05T12:44:55.081Z] Total : 37770.22 147.54 0.00 0.00 0.00 0.00 0.00 00:13:55.228 00:13:56.161 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:56.161 Nvme0n1 : 10.00 37802.00 147.66 0.00 0.00 0.00 0.00 0.00 00:13:56.161 [2024-12-05T12:44:56.014Z] =================================================================================================================== 00:13:56.161 [2024-12-05T12:44:56.014Z] Total : 37802.00 147.66 0.00 0.00 0.00 0.00 0.00 00:13:56.161 00:13:56.161 00:13:56.161 Latency(us) 00:13:56.161 [2024-12-05T12:44:56.014Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:56.161 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:56.161 Nvme0n1 : 10.00 37801.94 147.66 0.00 0.00 3383.35 2123.85 12087.75 00:13:56.161 [2024-12-05T12:44:56.014Z] =================================================================================================================== 00:13:56.161 [2024-12-05T12:44:56.014Z] Total : 37801.94 147.66 0.00 0.00 3383.35 2123.85 12087.75 00:13:56.161 { 00:13:56.161 "results": [ 00:13:56.161 { 00:13:56.161 "job": "Nvme0n1", 00:13:56.161 "core_mask": "0x2", 00:13:56.161 "workload": "randwrite", 00:13:56.161 "status": "finished", 00:13:56.161 "queue_depth": 128, 00:13:56.161 "io_size": 4096, 
00:13:56.161 "runtime": 10.003165, 00:13:56.161 "iops": 37801.935687354955, 00:13:56.161 "mibps": 147.6638112787303, 00:13:56.161 "io_failed": 0, 00:13:56.161 "io_timeout": 0, 00:13:56.161 "avg_latency_us": 3383.3511316483364, 00:13:56.161 "min_latency_us": 2123.8518518518517, 00:13:56.161 "max_latency_us": 12087.75111111111 00:13:56.161 } 00:13:56.161 ], 00:13:56.161 "core_count": 1 00:13:56.161 } 00:13:56.161 13:44:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1604827 00:13:56.161 13:44:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 1604827 ']' 00:13:56.161 13:44:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 1604827 00:13:56.161 13:44:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:13:56.161 13:44:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:56.161 13:44:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1604827 00:13:56.161 13:44:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:56.161 13:44:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:56.161 13:44:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1604827' 00:13:56.161 killing process with pid 1604827 00:13:56.161 13:44:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 1604827 00:13:56.161 Received shutdown signal, test time was about 10.000000 seconds 00:13:56.161 00:13:56.161 Latency(us) 00:13:56.161 [2024-12-05T12:44:56.014Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:56.161 [2024-12-05T12:44:56.014Z] =================================================================================================================== 00:13:56.161 [2024-12-05T12:44:56.014Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:56.161 13:44:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 1604827 00:13:56.418 13:44:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:13:56.419 13:44:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:56.677 13:44:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:13:56.677 13:44:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ab535d0f-f60f-46c2-a875-1f4f188ea360 00:13:56.935 13:44:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:13:56.935 13:44:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:13:56.935 13:44:56 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1601537 00:13:56.935 13:44:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1601537 00:13:56.935 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1601537 Killed "${NVMF_APP[@]}" "$@" 00:13:56.935 13:44:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:13:56.935 13:44:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:13:56.935 13:44:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:56.935 13:44:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:56.935 13:44:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:56.935 13:44:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=1606977 00:13:56.935 13:44:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 1606977 00:13:56.935 13:44:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:13:56.935 13:44:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 1606977 ']' 00:13:56.935 13:44:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:56.935 13:44:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:56.935 13:44:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:56.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:56.935 13:44:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:56.935 13:44:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:56.935 [2024-12-05 13:44:56.709473] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 00:13:56.935 [2024-12-05 13:44:56.709522] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:56.935 [2024-12-05 13:44:56.784640] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:57.194 [2024-12-05 13:44:56.804719] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:57.194 [2024-12-05 13:44:56.804752] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:57.194 [2024-12-05 13:44:56.804759] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:57.194 [2024-12-05 13:44:56.804765] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:13:57.194 [2024-12-05 13:44:56.804770] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:57.194 [2024-12-05 13:44:56.805238] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:57.194 13:44:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:57.194 13:44:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:13:57.194 13:44:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:57.194 13:44:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:57.194 13:44:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:57.194 13:44:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:57.194 13:44:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:57.453 [2024-12-05 13:44:57.093393] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:13:57.453 [2024-12-05 13:44:57.093481] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:13:57.453 [2024-12-05 13:44:57.093505] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:13:57.453 13:44:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:13:57.453 13:44:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev d96377e9-7b3a-4af8-adbf-5951f7da6d6c 00:13:57.453 13:44:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=d96377e9-7b3a-4af8-adbf-5951f7da6d6c 00:13:57.453 13:44:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:57.453 13:44:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:13:57.453 13:44:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:57.453 13:44:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:57.453 13:44:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:13:57.453 13:44:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b d96377e9-7b3a-4af8-adbf-5951f7da6d6c -t 2000 00:13:57.712 [ 00:13:57.712 { 00:13:57.712 "name": "d96377e9-7b3a-4af8-adbf-5951f7da6d6c", 00:13:57.712 "aliases": [ 00:13:57.712 "lvs/lvol" 00:13:57.712 ], 00:13:57.712 "product_name": "Logical Volume", 00:13:57.712 "block_size": 4096, 00:13:57.712 "num_blocks": 38912, 00:13:57.712 "uuid": "d96377e9-7b3a-4af8-adbf-5951f7da6d6c", 00:13:57.712 "assigned_rate_limits": { 00:13:57.712 "rw_ios_per_sec": 0, 00:13:57.712 "rw_mbytes_per_sec": 0, 
00:13:57.712 "r_mbytes_per_sec": 0, 00:13:57.712 "w_mbytes_per_sec": 0 00:13:57.712 }, 00:13:57.712 "claimed": false, 00:13:57.712 "zoned": false, 00:13:57.712 "supported_io_types": { 00:13:57.712 "read": true, 00:13:57.712 "write": true, 00:13:57.712 "unmap": true, 00:13:57.712 "flush": false, 00:13:57.712 "reset": true, 00:13:57.712 "nvme_admin": false, 00:13:57.712 "nvme_io": false, 00:13:57.712 "nvme_io_md": false, 00:13:57.712 "write_zeroes": true, 00:13:57.712 "zcopy": false, 00:13:57.712 "get_zone_info": false, 00:13:57.712 "zone_management": false, 00:13:57.712 "zone_append": false, 00:13:57.712 "compare": false, 00:13:57.712 "compare_and_write": false, 00:13:57.712 "abort": false, 00:13:57.712 "seek_hole": true, 00:13:57.712 "seek_data": true, 00:13:57.712 "copy": false, 00:13:57.712 "nvme_iov_md": false 00:13:57.712 }, 00:13:57.712 "driver_specific": { 00:13:57.712 "lvol": { 00:13:57.712 "lvol_store_uuid": "ab535d0f-f60f-46c2-a875-1f4f188ea360", 00:13:57.712 "base_bdev": "aio_bdev", 00:13:57.712 "thin_provision": false, 00:13:57.712 "num_allocated_clusters": 38, 00:13:57.712 "snapshot": false, 00:13:57.712 "clone": false, 00:13:57.712 "esnap_clone": false 00:13:57.712 } 00:13:57.712 } 00:13:57.712 } 00:13:57.712 ] 00:13:57.712 13:44:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:13:57.712 13:44:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ab535d0f-f60f-46c2-a875-1f4f188ea360 00:13:57.712 13:44:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:13:57.970 13:44:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:13:57.970 13:44:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ab535d0f-f60f-46c2-a875-1f4f188ea360 00:13:57.970 13:44:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:13:57.970 13:44:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:13:57.970 13:44:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:13:58.230 [2024-12-05 13:44:57.958032] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:13:58.230 13:44:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ab535d0f-f60f-46c2-a875-1f4f188ea360 00:13:58.230 13:44:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:13:58.230 13:44:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ab535d0f-f60f-46c2-a875-1f4f188ea360 00:13:58.230 13:44:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local 
arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:13:58.230 13:44:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:58.230 13:44:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:13:58.230 13:44:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:58.230 13:44:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:13:58.230 13:44:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:58.230 13:44:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:13:58.230 13:44:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:13:58.230 13:44:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ab535d0f-f60f-46c2-a875-1f4f188ea360 00:13:58.489 request: 00:13:58.489 { 00:13:58.489 "uuid": "ab535d0f-f60f-46c2-a875-1f4f188ea360", 00:13:58.489 "method": "bdev_lvol_get_lvstores", 00:13:58.489 "req_id": 1 00:13:58.489 } 00:13:58.489 Got JSON-RPC error response 00:13:58.489 response: 00:13:58.489 { 00:13:58.489 "code": -19, 00:13:58.489 "message": "No such device" 00:13:58.489 } 00:13:58.489 13:44:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:13:58.489 13:44:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:58.489 13:44:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:58.489 13:44:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:58.489 13:44:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:58.748 aio_bdev 00:13:58.748 13:44:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev d96377e9-7b3a-4af8-adbf-5951f7da6d6c 00:13:58.748 13:44:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=d96377e9-7b3a-4af8-adbf-5951f7da6d6c 00:13:58.748 13:44:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:58.748 13:44:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:13:58.748 13:44:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:58.748 13:44:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:58.748 13:44:58 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:13:58.748 13:44:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b d96377e9-7b3a-4af8-adbf-5951f7da6d6c -t 2000 00:13:59.007 [ 00:13:59.007 { 00:13:59.007 "name": "d96377e9-7b3a-4af8-adbf-5951f7da6d6c", 00:13:59.007 "aliases": [ 00:13:59.007 "lvs/lvol" 00:13:59.007 ], 00:13:59.007 "product_name": "Logical Volume", 00:13:59.007 "block_size": 4096, 00:13:59.007 "num_blocks": 38912, 00:13:59.007 "uuid": "d96377e9-7b3a-4af8-adbf-5951f7da6d6c", 00:13:59.007 "assigned_rate_limits": { 00:13:59.007 "rw_ios_per_sec": 0, 00:13:59.007 "rw_mbytes_per_sec": 0, 00:13:59.007 "r_mbytes_per_sec": 0, 00:13:59.007 "w_mbytes_per_sec": 0 00:13:59.007 }, 00:13:59.007 "claimed": false, 00:13:59.007 "zoned": false, 00:13:59.007 "supported_io_types": { 00:13:59.007 "read": true, 00:13:59.007 "write": true, 00:13:59.007 "unmap": true, 00:13:59.007 "flush": false, 00:13:59.007 "reset": true, 00:13:59.007 "nvme_admin": false, 00:13:59.007 "nvme_io": false, 00:13:59.007 "nvme_io_md": false, 00:13:59.007 "write_zeroes": true, 00:13:59.007 "zcopy": false, 00:13:59.007 "get_zone_info": false, 00:13:59.007 "zone_management": false, 00:13:59.007 "zone_append": false, 00:13:59.007 "compare": false, 00:13:59.007 "compare_and_write": false, 00:13:59.007 "abort": false, 00:13:59.007 "seek_hole": true, 00:13:59.007 "seek_data": true, 00:13:59.007 "copy": false, 00:13:59.007 "nvme_iov_md": false 00:13:59.007 }, 00:13:59.007 "driver_specific": { 00:13:59.007 "lvol": { 00:13:59.007 "lvol_store_uuid": "ab535d0f-f60f-46c2-a875-1f4f188ea360", 00:13:59.007 "base_bdev": "aio_bdev", 00:13:59.007 "thin_provision": false, 00:13:59.007 "num_allocated_clusters": 38, 00:13:59.007 "snapshot": false, 00:13:59.007 "clone": false, 00:13:59.007 "esnap_clone": false 00:13:59.007 } 00:13:59.007 } 00:13:59.007 } 00:13:59.007 ] 00:13:59.007 13:44:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:13:59.007 13:44:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ab535d0f-f60f-46c2-a875-1f4f188ea360 00:13:59.007 13:44:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:13:59.267 13:44:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:13:59.267 13:44:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ab535d0f-f60f-46c2-a875-1f4f188ea360 00:13:59.267 13:44:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:13:59.267 13:44:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:13:59.267 13:44:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete d96377e9-7b3a-4af8-adbf-5951f7da6d6c 00:13:59.525 13:44:59 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ab535d0f-f60f-46c2-a875-1f4f188ea360 00:13:59.784 13:44:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:00.044 13:44:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:00.044 00:14:00.044 real 0m16.381s 00:14:00.044 user 0m43.187s 00:14:00.044 sys 0m2.756s 00:14:00.044 13:44:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:00.044 13:44:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:00.044 ************************************ 00:14:00.044 END TEST lvs_grow_dirty 00:14:00.044 ************************************ 00:14:00.044 13:44:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:14:00.044 13:44:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:14:00.044 13:44:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:14:00.044 13:44:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:14:00.044 13:44:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:14:00.044 13:44:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:14:00.044 13:44:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:14:00.044 13:44:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:14:00.044 13:44:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:14:00.044 nvmf_trace.0 00:14:00.044 13:44:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:14:00.044 13:44:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:14:00.044 13:44:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:00.044 13:44:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:14:00.044 13:44:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:14:00.044 13:44:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:14:00.044 13:44:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:14:00.044 13:44:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:00.044 13:44:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:14:00.044 rmmod nvme_rdma 00:14:00.044 rmmod nvme_fabrics 00:14:00.044 13:44:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:00.044 13:44:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:14:00.044 
13:44:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:14:00.044 13:44:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 1606977 ']' 00:14:00.044 13:44:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 1606977 00:14:00.044 13:44:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 1606977 ']' 00:14:00.044 13:44:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 1606977 00:14:00.044 13:44:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:14:00.044 13:44:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:00.044 13:44:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1606977 00:14:00.044 13:44:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:00.044 13:44:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:00.044 13:44:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1606977' 00:14:00.044 killing process with pid 1606977 00:14:00.044 13:44:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 1606977 00:14:00.044 13:44:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 1606977 00:14:00.303 13:45:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:00.303 13:45:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:14:00.303 00:14:00.303 real 0m38.720s 00:14:00.303 user 1m3.616s 00:14:00.303 sys 0m8.789s 00:14:00.303 13:45:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:00.303 13:45:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:00.303 ************************************ 00:14:00.303 END TEST nvmf_lvs_grow 00:14:00.303 ************************************ 00:14:00.303 13:45:00 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=rdma 00:14:00.303 13:45:00 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:00.303 13:45:00 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:00.303 13:45:00 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:14:00.303 ************************************ 00:14:00.303 START TEST nvmf_bdev_io_wait 00:14:00.303 ************************************ 00:14:00.303 13:45:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=rdma 00:14:00.563 * Looking for test storage... 
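Before bdev_io_wait begins: what the lvs_grow_dirty run above verified is that deleting the backing AIO bdev hot-removes the live lvol store (the -19 "No such device" JSON-RPC error from bdev_lvol_get_lvstores is the expected outcome), and that re-creating the AIO bdev over the same file lets examine re-discover the store, with the lvol reappearing under its original UUID and the grown cluster counts (free_clusters 61 of total_data_clusters 99). A minimal sketch of that check, assuming SPDK's scripts/rpc.py on PATH, a placeholder ./aio_bdev backing file, and the test's own UUIDs in $LVS_UUID / $LVOL_UUID:

  rpc.py bdev_aio_delete aio_bdev                        # hot-removes the open lvstore
  rpc.py bdev_lvol_get_lvstores -u "$LVS_UUID" \
      || echo "store gone, as expected (-19)"            # lookup must now fail
  rpc.py bdev_aio_create ./aio_bdev aio_bdev 4096        # re-attach the backing file
  rpc.py bdev_wait_for_examine                           # lvstore is re-discovered
  rpc.py bdev_get_bdevs -b "$LVOL_UUID" -t 2000          # same lvol, grown counts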
00:14:00.563 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:14:00.563 13:45:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:00.563 13:45:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:14:00.563 13:45:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:00.563 13:45:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:00.563 13:45:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:00.563 13:45:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:00.563 13:45:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:00.563 13:45:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:14:00.563 13:45:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:14:00.563 13:45:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:14:00.563 13:45:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:14:00.563 13:45:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:14:00.563 13:45:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:14:00.563 13:45:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:14:00.563 13:45:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:00.563 13:45:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:14:00.563 13:45:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:14:00.563 13:45:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:00.563 13:45:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:00.563 13:45:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:14:00.563 13:45:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:14:00.563 13:45:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:00.563 13:45:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:14:00.563 13:45:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:14:00.563 13:45:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:14:00.563 13:45:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:14:00.563 13:45:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:00.563 13:45:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:14:00.563 13:45:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:14:00.563 13:45:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:00.563 13:45:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:00.563 13:45:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:14:00.563 13:45:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:00.563 13:45:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:00.563 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:00.563 --rc genhtml_branch_coverage=1 00:14:00.563 --rc genhtml_function_coverage=1 00:14:00.563 --rc genhtml_legend=1 00:14:00.563 --rc geninfo_all_blocks=1 00:14:00.563 --rc geninfo_unexecuted_blocks=1 00:14:00.563 00:14:00.563 ' 00:14:00.563 13:45:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:00.563 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:00.563 --rc genhtml_branch_coverage=1 00:14:00.563 --rc genhtml_function_coverage=1 00:14:00.563 --rc genhtml_legend=1 00:14:00.563 --rc geninfo_all_blocks=1 00:14:00.563 --rc geninfo_unexecuted_blocks=1 00:14:00.563 00:14:00.563 ' 00:14:00.563 13:45:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:00.563 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:00.563 --rc genhtml_branch_coverage=1 00:14:00.563 --rc genhtml_function_coverage=1 00:14:00.563 --rc genhtml_legend=1 00:14:00.563 --rc geninfo_all_blocks=1 00:14:00.563 --rc geninfo_unexecuted_blocks=1 00:14:00.563 00:14:00.563 ' 00:14:00.563 13:45:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:00.563 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:00.563 --rc genhtml_branch_coverage=1 00:14:00.563 --rc genhtml_function_coverage=1 00:14:00.563 --rc genhtml_legend=1 00:14:00.563 --rc geninfo_all_blocks=1 00:14:00.563 --rc geninfo_unexecuted_blocks=1 00:14:00.563 00:14:00.563 ' 00:14:00.563 13:45:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:14:00.563 13:45:00 
nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:14:00.563 13:45:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:00.563 13:45:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:00.563 13:45:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:00.563 13:45:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:00.563 13:45:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:00.563 13:45:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:00.563 13:45:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:00.563 13:45:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:00.563 13:45:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:00.563 13:45:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:00.563 13:45:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:14:00.564 13:45:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:14:00.564 13:45:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:00.564 13:45:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:00.564 13:45:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:00.564 13:45:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:00.564 13:45:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:14:00.564 13:45:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:14:00.564 13:45:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:00.564 13:45:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:00.564 13:45:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:00.564 13:45:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.564 13:45:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.564 13:45:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.564 13:45:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:14:00.564 13:45:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.564 13:45:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:14:00.564 13:45:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:00.564 13:45:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:00.564 13:45:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:00.564 13:45:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:00.564 13:45:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:00.564 13:45:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:00.564 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:00.564 13:45:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:00.564 13:45:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:00.564 13:45:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:00.564 13:45:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:00.564 13:45:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:00.564 13:45:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:14:00.564 13:45:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:14:00.564 13:45:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:00.564 13:45:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:00.564 13:45:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:00.564 13:45:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:00.564 13:45:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:00.564 13:45:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:00.564 13:45:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:00.564 13:45:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:00.564 13:45:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:00.564 13:45:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:14:00.564 13:45:00 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:07.136 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:07.136 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:14:07.136 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:07.136 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:07.136 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:07.136 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:07.136 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:07.136 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:14:07.137 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:07.137 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:14:07.137 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:14:07.137 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:14:07.137 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:14:07.137 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:14:07.137 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:14:07.137 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:07.137 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:07.137 13:45:06 
nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:07.137 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:07.137 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:07.137 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:07.137 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:07.137 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:07.137 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:07.137 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:07.137 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:07.137 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:07.137 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:07.137 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:14:07.137 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:14:07.137 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:14:07.137 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:14:07.137 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:14:07.137 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:07.137 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:07.137 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:14:07.137 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:14:07.137 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:14:07.137 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:14:07.137 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:07.137 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:07.137 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:14:07.137 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:14:07.137 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:07.137 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:14:07.137 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:14:07.137 13:45:06 
nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:14:07.137 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:14:07.137 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:07.137 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:07.137 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:14:07.137 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:14:07.137 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:07.137 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:14:07.137 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:07.137 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:07.137 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:14:07.137 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:07.137 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:07.137 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:14:07.137 Found net devices under 0000:18:00.0: mlx_0_0 00:14:07.137 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:07.137 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:07.137 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:07.137 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:14:07.137 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:07.137 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:07.137 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:14:07.137 Found net devices under 0000:18:00.1: mlx_0_1 00:14:07.137 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:07.137 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:07.137 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:14:07.137 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:07.137 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:14:07.137 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:14:07.137 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # rdma_device_init 00:14:07.137 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait 
-- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:14:07.137 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@62 -- # uname 00:14:07.137 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:14:07.137 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@66 -- # modprobe ib_cm 00:14:07.137 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@67 -- # modprobe ib_core 00:14:07.137 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@68 -- # modprobe ib_umad 00:14:07.137 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:14:07.137 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@70 -- # modprobe iw_cm 00:14:07.137 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:14:07.137 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:14:07.137 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@530 -- # allocate_nic_ips 00:14:07.137 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:14:07.137 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@77 -- # get_rdma_if_list 00:14:07.137 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:07.137 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:14:07.137 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:14:07.137 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:07.137 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:14:07.137 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:14:07.137 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:07.137 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:07.137 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@108 -- # echo mlx_0_0 00:14:07.137 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@109 -- # continue 2 00:14:07.137 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:14:07.137 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:07.137 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:07.137 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:07.137 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:07.137 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@108 -- # echo mlx_0_1 00:14:07.137 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@109 -- # continue 2 00:14:07.137 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:14:07.137 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:14:07.137 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:14:07.137 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:14:07.137 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # awk '{print $4}' 00:14:07.137 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # cut -d/ -f1 00:14:07.137 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:14:07.137 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:14:07.138 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:14:07.138 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:07.138 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:14:07.138 altname enp24s0f0np0 00:14:07.138 altname ens785f0np0 00:14:07.138 inet 192.168.100.8/24 scope global mlx_0_0 00:14:07.138 valid_lft forever preferred_lft forever 00:14:07.138 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:14:07.138 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:14:07.138 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:14:07.138 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:14:07.138 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # awk '{print $4}' 00:14:07.138 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # cut -d/ -f1 00:14:07.138 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:14:07.138 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:14:07.138 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:14:07.138 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:07.138 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:14:07.138 altname enp24s0f1np1 00:14:07.138 altname ens785f1np1 00:14:07.138 inet 192.168.100.9/24 scope global mlx_0_1 00:14:07.138 valid_lft forever preferred_lft forever 00:14:07.138 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:14:07.138 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:07.138 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:14:07.138 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:14:07.138 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:14:07.138 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@90 -- # get_rdma_if_list 00:14:07.138 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:07.138 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@98 -- # mapfile -t 
rxe_net_devs 00:14:07.138 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:14:07.138 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:07.138 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:14:07.138 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:14:07.138 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:07.138 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:07.138 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@108 -- # echo mlx_0_0 00:14:07.138 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@109 -- # continue 2 00:14:07.138 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:14:07.138 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:07.138 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:07.138 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:07.138 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:07.138 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@108 -- # echo mlx_0_1 00:14:07.138 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@109 -- # continue 2 00:14:07.138 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:14:07.138 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:14:07.138 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:14:07.138 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:14:07.138 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # cut -d/ -f1 00:14:07.138 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # awk '{print $4}' 00:14:07.138 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:14:07.138 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:14:07.138 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:14:07.138 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:14:07.138 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # awk '{print $4}' 00:14:07.138 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # cut -d/ -f1 00:14:07.138 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:14:07.138 192.168.100.9' 00:14:07.138 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:14:07.138 192.168.100.9' 00:14:07.138 
13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@485 -- # head -n 1 00:14:07.138 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:14:07.138 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:14:07.138 192.168.100.9' 00:14:07.138 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@486 -- # tail -n +2 00:14:07.138 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@486 -- # head -n 1 00:14:07.138 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:14:07.138 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:14:07.138 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:14:07.138 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:14:07.138 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:14:07.138 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:14:07.138 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:14:07.138 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:07.138 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:07.138 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:07.138 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=1611440 00:14:07.138 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 1611440 00:14:07.138 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:14:07.138 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 1611440 ']' 00:14:07.138 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:07.138 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:07.138 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:07.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:07.138 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:07.138 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:07.138 [2024-12-05 13:45:06.486826] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 
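The address plumbing above reduces to a few one-liners: each mlx interface's first IPv4 address is scraped with ip/awk/cut, the first becomes NVMF_FIRST_TARGET_IP and the second NVMF_SECOND_TARGET_IP, and the target is started RPC-gated on four cores. A sketch using this rig's interface names (mlx_0_0 / mlx_0_1) and the in-tree nvmf_tgt binary:

  ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1   # -> 192.168.100.8
  ip -o -4 addr show mlx_0_1 | awk '{print $4}' | cut -d/ -f1   # -> 192.168.100.9
  modprobe nvme-rdma
  build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &     # held until framework_start_init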
00:14:07.138 [2024-12-05 13:45:06.486871] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:07.138 [2024-12-05 13:45:06.560883] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:07.138 [2024-12-05 13:45:06.583502] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:07.138 [2024-12-05 13:45:06.583543] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:07.138 [2024-12-05 13:45:06.583549] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:07.138 [2024-12-05 13:45:06.583554] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:07.138 [2024-12-05 13:45:06.583559] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:07.138 [2024-12-05 13:45:06.584927] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:07.138 [2024-12-05 13:45:06.585037] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:07.138 [2024-12-05 13:45:06.585162] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:07.138 [2024-12-05 13:45:06.585163] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:07.138 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:07.138 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:14:07.138 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:07.138 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:07.138 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:07.138 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:07.138 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:14:07.138 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.138 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:07.138 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.138 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:14:07.138 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.138 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:07.138 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.139 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:14:07.139 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.139 13:45:06 
nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:07.139 [2024-12-05 13:45:06.741606] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x617ff0/0x61c4e0) succeed. 00:14:07.139 [2024-12-05 13:45:06.749517] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x619680/0x65db80) succeed. 00:14:07.139 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.139 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:07.139 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.139 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:07.139 Malloc0 00:14:07.139 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.139 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:07.139 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.139 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:07.139 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.139 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:07.139 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.139 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:07.139 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.139 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:14:07.139 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.139 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:07.139 [2024-12-05 13:45:06.918997] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:14:07.139 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.139 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1611729 00:14:07.139 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:14:07.139 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:14:07.139 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1611731 00:14:07.139 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:14:07.139 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 
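For reference, the target bring-up just performed is a plain RPC sequence; restated as direct rpc.py calls with the log's own values:

  rpc.py bdev_set_options -p 5 -c 1                 # bdev io pool/cache opts, before init
  rpc.py framework_start_init                       # releases the --wait-for-rpc gate
  rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0       # 64 MiB backing bdev, 512 B blocks
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420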
00:14:07.139 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:14:07.139 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:14:07.139 { 00:14:07.139 "params": { 00:14:07.139 "name": "Nvme$subsystem", 00:14:07.139 "trtype": "$TEST_TRANSPORT", 00:14:07.139 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:07.139 "adrfam": "ipv4", 00:14:07.139 "trsvcid": "$NVMF_PORT", 00:14:07.139 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:07.139 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:07.139 "hdgst": ${hdgst:-false}, 00:14:07.139 "ddgst": ${ddgst:-false} 00:14:07.139 }, 00:14:07.139 "method": "bdev_nvme_attach_controller" 00:14:07.139 } 00:14:07.139 EOF 00:14:07.139 )") 00:14:07.139 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:14:07.139 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1611733 00:14:07.139 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:14:07.139 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:14:07.139 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:14:07.139 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:14:07.139 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:14:07.139 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1611736 00:14:07.139 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:14:07.139 { 00:14:07.139 "params": { 00:14:07.139 "name": "Nvme$subsystem", 00:14:07.139 "trtype": "$TEST_TRANSPORT", 00:14:07.139 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:07.139 "adrfam": "ipv4", 00:14:07.139 "trsvcid": "$NVMF_PORT", 00:14:07.139 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:07.139 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:07.139 "hdgst": ${hdgst:-false}, 00:14:07.139 "ddgst": ${ddgst:-false} 00:14:07.139 }, 00:14:07.139 "method": "bdev_nvme_attach_controller" 00:14:07.139 } 00:14:07.139 EOF 00:14:07.139 )") 00:14:07.139 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:14:07.139 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:14:07.139 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:14:07.139 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:14:07.139 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:14:07.139 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:14:07.139 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 
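Each of the four bdevperf instances above (write/read/flush/unmap, pinned to core masks 0x10/0x20/0x40/0x80) reads its bdev config from /dev/fd/63, which is consistent with bash process substitution over the gen_nvmf_target_json helper whose heredoc template is expanding here. A sketch of the write-path launch under that assumption:

  build/examples/bdevperf -m 0x10 -i 1 -q 128 -o 4096 -w write -t 1 -s 256 \
      --json <(gen_nvmf_target_json) &
  WRITE_PID=$!   # 1611729 in this run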
00:14:07.139 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:14:07.139 { 00:14:07.139 "params": { 00:14:07.139 "name": "Nvme$subsystem", 00:14:07.139 "trtype": "$TEST_TRANSPORT", 00:14:07.139 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:07.139 "adrfam": "ipv4", 00:14:07.139 "trsvcid": "$NVMF_PORT", 00:14:07.139 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:07.139 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:07.139 "hdgst": ${hdgst:-false}, 00:14:07.139 "ddgst": ${ddgst:-false} 00:14:07.139 }, 00:14:07.139 "method": "bdev_nvme_attach_controller" 00:14:07.139 } 00:14:07.139 EOF 00:14:07.139 )") 00:14:07.139 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:14:07.139 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:14:07.139 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:14:07.139 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:14:07.139 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:14:07.139 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:14:07.139 { 00:14:07.139 "params": { 00:14:07.139 "name": "Nvme$subsystem", 00:14:07.139 "trtype": "$TEST_TRANSPORT", 00:14:07.139 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:07.139 "adrfam": "ipv4", 00:14:07.139 "trsvcid": "$NVMF_PORT", 00:14:07.139 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:07.139 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:07.139 "hdgst": ${hdgst:-false}, 00:14:07.139 "ddgst": ${ddgst:-false} 00:14:07.139 }, 00:14:07.139 "method": "bdev_nvme_attach_controller" 00:14:07.139 } 00:14:07.139 EOF 00:14:07.139 )") 00:14:07.139 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:14:07.139 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1611729 00:14:07.139 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:14:07.139 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:14:07.139 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:14:07.139 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:14:07.139 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:14:07.139 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:14:07.139 "params": { 00:14:07.139 "name": "Nvme1", 00:14:07.139 "trtype": "rdma", 00:14:07.139 "traddr": "192.168.100.8", 00:14:07.139 "adrfam": "ipv4", 00:14:07.139 "trsvcid": "4420", 00:14:07.139 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:07.139 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:07.139 "hdgst": false, 00:14:07.139 "ddgst": false 00:14:07.139 }, 00:14:07.139 "method": "bdev_nvme_attach_controller" 00:14:07.139 }' 00:14:07.139 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:14:07.139 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
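Each bdevperf instance receives that configuration as --json /dev/fd/63, which points to bash process substitution: the generator's stdout is handed to the child as an anonymous file descriptor. The traced write-workload invocation, spelled out directly (gen_nvmf_target_json as sketched above):

# qd 128, 4 KiB I/O, 1 s run; -s sizes the DPDK memory pool in MB
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf \
    -m 0x10 -i 1 --json <(gen_nvmf_target_json) \
    -q 128 -o 4096 -w write -t 1 -s 256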
00:14:07.139 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:14:07.139 "params": { 00:14:07.139 "name": "Nvme1", 00:14:07.139 "trtype": "rdma", 00:14:07.139 "traddr": "192.168.100.8", 00:14:07.139 "adrfam": "ipv4", 00:14:07.139 "trsvcid": "4420", 00:14:07.139 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:07.139 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:07.139 "hdgst": false, 00:14:07.139 "ddgst": false 00:14:07.139 }, 00:14:07.139 "method": "bdev_nvme_attach_controller" 00:14:07.139 }' 00:14:07.140 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:14:07.140 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:14:07.140 "params": { 00:14:07.140 "name": "Nvme1", 00:14:07.140 "trtype": "rdma", 00:14:07.140 "traddr": "192.168.100.8", 00:14:07.140 "adrfam": "ipv4", 00:14:07.140 "trsvcid": "4420", 00:14:07.140 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:07.140 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:07.140 "hdgst": false, 00:14:07.140 "ddgst": false 00:14:07.140 }, 00:14:07.140 "method": "bdev_nvme_attach_controller" 00:14:07.140 }' 00:14:07.140 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:14:07.140 13:45:06 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:14:07.140 "params": { 00:14:07.140 "name": "Nvme1", 00:14:07.140 "trtype": "rdma", 00:14:07.140 "traddr": "192.168.100.8", 00:14:07.140 "adrfam": "ipv4", 00:14:07.140 "trsvcid": "4420", 00:14:07.140 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:07.140 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:07.140 "hdgst": false, 00:14:07.140 "ddgst": false 00:14:07.140 }, 00:14:07.140 "method": "bdev_nvme_attach_controller" 00:14:07.140 }' 00:14:07.140 [2024-12-05 13:45:06.967968] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 00:14:07.140 [2024-12-05 13:45:06.968013] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:14:07.140 [2024-12-05 13:45:06.970769] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 00:14:07.140 [2024-12-05 13:45:06.970810] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:14:07.140 [2024-12-05 13:45:06.970850] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 00:14:07.140 [2024-12-05 13:45:06.970886] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:14:07.140 [2024-12-05 13:45:06.972198] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 
00:14:07.140 [2024-12-05 13:45:06.972239] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:14:07.399 [2024-12-05 13:45:07.146806] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:07.399 [2024-12-05 13:45:07.161435] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:14:07.399 [2024-12-05 13:45:07.236787] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:07.658 [2024-12-05 13:45:07.258588] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:14:07.658 [2024-12-05 13:45:07.293035] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:07.658 [2024-12-05 13:45:07.307466] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:14:07.658 [2024-12-05 13:45:07.352379] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:07.658 [2024-12-05 13:45:07.366795] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:14:07.658 Running I/O for 1 seconds... 00:14:07.658 Running I/O for 1 seconds... 00:14:07.658 Running I/O for 1 seconds... 00:14:07.658 Running I/O for 1 seconds... 00:14:09.039 17072.00 IOPS, 66.69 MiB/s [2024-12-05T12:45:08.892Z] 267768.00 IOPS, 1045.97 MiB/s [2024-12-05T12:45:08.892Z] 15979.00 IOPS, 62.42 MiB/s 00:14:09.039 Latency(us) 00:14:09.039 [2024-12-05T12:45:08.892Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:09.039 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:14:09.039 Nvme1n1 : 1.00 267401.64 1044.54 0.00 0.00 476.00 200.25 1686.95 00:14:09.039 [2024-12-05T12:45:08.892Z] =================================================================================================================== 00:14:09.039 [2024-12-05T12:45:08.892Z] Total : 267401.64 1044.54 0.00 0.00 476.00 200.25 1686.95 00:14:09.039 00:14:09.039 Latency(us) 00:14:09.039 [2024-12-05T12:45:08.892Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:09.039 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:14:09.039 Nvme1n1 : 1.01 17113.38 66.85 0.00 0.00 7456.92 4587.52 18447.17 00:14:09.039 [2024-12-05T12:45:08.892Z] =================================================================================================================== 00:14:09.039 [2024-12-05T12:45:08.892Z] Total : 17113.38 66.85 0.00 0.00 7456.92 4587.52 18447.17 00:14:09.039 00:14:09.039 Latency(us) 00:14:09.039 [2024-12-05T12:45:08.892Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:09.039 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:14:09.039 Nvme1n1 : 1.01 16023.49 62.59 0.00 0.00 7964.19 4805.97 17476.27 00:14:09.039 [2024-12-05T12:45:08.892Z] =================================================================================================================== 00:14:09.039 [2024-12-05T12:45:08.892Z] Total : 16023.49 62.59 0.00 0.00 7964.19 4805.97 17476.27 00:14:09.039 17650.00 IOPS, 68.95 MiB/s 00:14:09.039 Latency(us) 00:14:09.039 [2024-12-05T12:45:08.892Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:09.039 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:14:09.039 Nvme1n1 : 1.01 17743.64 69.31 0.00 0.00 7198.44 2706.39 17670.45 00:14:09.039 [2024-12-05T12:45:08.892Z] 
=================================================================================================================== 00:14:09.039 [2024-12-05T12:45:08.892Z] Total : 17743.64 69.31 0.00 0.00 7198.44 2706.39 17670.45 00:14:09.039 13:45:08 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1611731 00:14:09.039 13:45:08 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1611733 00:14:09.039 13:45:08 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1611736 00:14:09.039 13:45:08 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:09.039 13:45:08 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.039 13:45:08 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:09.039 13:45:08 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.039 13:45:08 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:14:09.039 13:45:08 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:14:09.039 13:45:08 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:09.039 13:45:08 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:14:09.039 13:45:08 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:14:09.039 13:45:08 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:14:09.039 13:45:08 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:14:09.039 13:45:08 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:09.039 13:45:08 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:14:09.039 rmmod nvme_rdma 00:14:09.039 rmmod nvme_fabrics 00:14:09.039 13:45:08 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:09.039 13:45:08 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:14:09.039 13:45:08 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:14:09.039 13:45:08 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 1611440 ']' 00:14:09.039 13:45:08 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 1611440 00:14:09.039 13:45:08 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 1611440 ']' 00:14:09.039 13:45:08 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 1611440 00:14:09.039 13:45:08 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:14:09.039 13:45:08 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:09.039 13:45:08 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1611440 00:14:09.039 13:45:08 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:09.039 13:45:08 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = 
sudo ']' 00:14:09.039 13:45:08 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1611440' 00:14:09.039 killing process with pid 1611440 00:14:09.039 13:45:08 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 1611440 00:14:09.039 13:45:08 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 1611440 00:14:09.298 13:45:08 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:09.299 13:45:08 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:14:09.299 00:14:09.299 real 0m8.892s 00:14:09.299 user 0m16.374s 00:14:09.299 sys 0m5.827s 00:14:09.299 13:45:08 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:09.299 13:45:08 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:09.299 ************************************ 00:14:09.299 END TEST nvmf_bdev_io_wait 00:14:09.299 ************************************ 00:14:09.299 13:45:09 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=rdma 00:14:09.299 13:45:09 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:09.299 13:45:09 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:09.299 13:45:09 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:14:09.299 ************************************ 00:14:09.299 START TEST nvmf_queue_depth 00:14:09.299 ************************************ 00:14:09.299 13:45:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=rdma 00:14:09.299 * Looking for test storage... 
00:14:09.299 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:14:09.299 13:45:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:09.299 13:45:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:14:09.299 13:45:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:09.559 13:45:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:09.559 13:45:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:09.559 13:45:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:09.559 13:45:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:09.559 13:45:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:14:09.559 13:45:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:14:09.559 13:45:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:14:09.559 13:45:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:14:09.559 13:45:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:14:09.559 13:45:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:14:09.559 13:45:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:14:09.559 13:45:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:09.559 13:45:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:14:09.559 13:45:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:14:09.559 13:45:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:09.559 13:45:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:09.559 13:45:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:14:09.559 13:45:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:14:09.559 13:45:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:09.559 13:45:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:14:09.559 13:45:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:14:09.559 13:45:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:14:09.559 13:45:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:14:09.559 13:45:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:09.559 13:45:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:14:09.559 13:45:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:14:09.559 13:45:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:09.559 13:45:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:09.559 13:45:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:14:09.559 13:45:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:09.559 13:45:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:09.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:09.559 --rc genhtml_branch_coverage=1 00:14:09.559 --rc genhtml_function_coverage=1 00:14:09.559 --rc genhtml_legend=1 00:14:09.559 --rc geninfo_all_blocks=1 00:14:09.559 --rc geninfo_unexecuted_blocks=1 00:14:09.559 00:14:09.559 ' 00:14:09.559 13:45:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:09.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:09.559 --rc genhtml_branch_coverage=1 00:14:09.559 --rc genhtml_function_coverage=1 00:14:09.559 --rc genhtml_legend=1 00:14:09.559 --rc geninfo_all_blocks=1 00:14:09.559 --rc geninfo_unexecuted_blocks=1 00:14:09.559 00:14:09.559 ' 00:14:09.559 13:45:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:09.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:09.559 --rc genhtml_branch_coverage=1 00:14:09.559 --rc genhtml_function_coverage=1 00:14:09.559 --rc genhtml_legend=1 00:14:09.559 --rc geninfo_all_blocks=1 00:14:09.559 --rc geninfo_unexecuted_blocks=1 00:14:09.559 00:14:09.559 ' 00:14:09.559 13:45:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:09.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:09.559 --rc genhtml_branch_coverage=1 00:14:09.559 --rc genhtml_function_coverage=1 00:14:09.559 --rc genhtml_legend=1 00:14:09.559 --rc geninfo_all_blocks=1 00:14:09.559 --rc geninfo_unexecuted_blocks=1 00:14:09.559 00:14:09.559 ' 00:14:09.559 13:45:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:14:09.559 13:45:09 
nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:14:09.559 13:45:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:09.559 13:45:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:09.559 13:45:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:09.559 13:45:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:09.559 13:45:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:09.559 13:45:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:09.559 13:45:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:09.559 13:45:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:09.559 13:45:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:09.559 13:45:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:09.559 13:45:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:14:09.559 13:45:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:14:09.559 13:45:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:09.559 13:45:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:09.559 13:45:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:09.559 13:45:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:09.559 13:45:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:14:09.559 13:45:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:14:09.559 13:45:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:09.559 13:45:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:09.559 13:45:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:09.559 13:45:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.559 13:45:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.560 13:45:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.560 13:45:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:14:09.560 13:45:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.560 13:45:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:14:09.560 13:45:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:09.560 13:45:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:09.560 13:45:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:09.560 13:45:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:09.560 13:45:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:09.560 13:45:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:09.560 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:09.560 13:45:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:09.560 13:45:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:09.560 13:45:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:09.560 13:45:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:14:09.560 13:45:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
MALLOC_BLOCK_SIZE=512 00:14:09.560 13:45:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:09.560 13:45:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:14:09.560 13:45:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:14:09.560 13:45:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:09.560 13:45:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:09.560 13:45:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:09.560 13:45:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:09.560 13:45:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:09.560 13:45:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:09.560 13:45:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:09.560 13:45:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:09.560 13:45:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:09.560 13:45:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:14:09.560 13:45:09 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:16.129 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:16.129 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:14:16.129 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:16.129 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:16.129 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:16.129 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:16.129 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:16.129 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:14:16.129 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:16.129 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:14:16.129 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:14:16.129 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:14:16.129 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:14:16.129 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:14:16.129 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:14:16.129 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:16.129 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 
-- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:16.129 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:16.129 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:16.129 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:16.129 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:16.130 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:16.130 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:16.130 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:16.130 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:16.130 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:16.130 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:16.130 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:16.130 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:14:16.130 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:14:16.130 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:14:16.130 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:14:16.130 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:14:16.130 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:16.130 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:16.130 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:14:16.130 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:14:16.130 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:14:16.130 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:14:16.130 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:16.130 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:16.130 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:14:16.130 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:14:16.130 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:16.130 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:14:16.130 Found 0000:18:00.1 (0x15b3 - 0x1015) 
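The scan above buckets the supported PCI IDs into the e810/x722/mlx arrays, then resolves each matching PCI address to its kernel interface through sysfs. A reduced sketch of the resolution step, assuming pci_devs has already been populated as in the trace:

net_devs=()
for pci in "${pci_devs[@]}"; do
    # every netdev owned by this PCI function appears under its sysfs node
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    pci_net_devs=("${pci_net_devs[@]##*/}")   # strip the path, keep "mlx_0_0" etc.
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
done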
00:14:16.130 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:14:16.130 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:14:16.130 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:16.130 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:16.130 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:14:16.130 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:14:16.130 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:16.130 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:14:16.130 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:16.130 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:16.130 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:14:16.130 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:16.130 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:16.130 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:14:16.130 Found net devices under 0000:18:00.0: mlx_0_0 00:14:16.130 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:16.130 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:16.130 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:16.130 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:14:16.130 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:16.130 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:16.130 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:14:16.130 Found net devices under 0000:18:00.1: mlx_0_1 00:14:16.130 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:16.130 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:16.130 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:14:16.130 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:16.130 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:14:16.130 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:14:16.130 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@448 -- # rdma_device_init 00:14:16.130 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@529 -- # load_ib_rdma_modules 00:14:16.130 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@62 -- # uname 00:14:16.130 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:14:16.130 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@66 -- # modprobe ib_cm 00:14:16.130 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@67 -- # modprobe ib_core 00:14:16.130 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@68 -- # modprobe ib_umad 00:14:16.130 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:14:16.130 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@70 -- # modprobe iw_cm 00:14:16.130 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:14:16.130 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:14:16.130 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@530 -- # allocate_nic_ips 00:14:16.130 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:14:16.130 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@77 -- # get_rdma_if_list 00:14:16.130 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:16.130 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:14:16.130 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:14:16.130 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:16.130 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:14:16.130 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:14:16.130 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:16.130 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:16.130 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@108 -- # echo mlx_0_0 00:14:16.130 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@109 -- # continue 2 00:14:16.130 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:14:16.130 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:16.130 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:16.130 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:16.130 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:16.130 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@108 -- # echo mlx_0_1 00:14:16.130 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@109 -- # continue 2 00:14:16.130 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@77 -- # for nic_name in 
$(get_rdma_if_list) 00:14:16.130 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:14:16.130 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:14:16.130 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:14:16.130 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # awk '{print $4}' 00:14:16.130 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # cut -d/ -f1 00:14:16.130 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:14:16.130 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:14:16.130 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:14:16.130 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:16.130 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:14:16.130 altname enp24s0f0np0 00:14:16.130 altname ens785f0np0 00:14:16.130 inet 192.168.100.8/24 scope global mlx_0_0 00:14:16.130 valid_lft forever preferred_lft forever 00:14:16.130 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:14:16.130 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:14:16.130 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:14:16.130 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:14:16.130 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # awk '{print $4}' 00:14:16.130 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # cut -d/ -f1 00:14:16.130 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:14:16.130 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:14:16.130 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:14:16.130 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:16.130 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:14:16.130 altname enp24s0f1np1 00:14:16.130 altname ens785f1np1 00:14:16.130 inet 192.168.100.9/24 scope global mlx_0_1 00:14:16.130 valid_lft forever preferred_lft forever 00:14:16.130 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:14:16.130 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:16.130 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:14:16.130 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:14:16.130 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:14:16.130 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@90 -- # get_rdma_if_list 00:14:16.130 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:16.130 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:14:16.130 13:45:15 
nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:14:16.130 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:16.130 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:14:16.130 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:14:16.130 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:16.130 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:16.130 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@108 -- # echo mlx_0_0 00:14:16.130 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@109 -- # continue 2 00:14:16.130 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:14:16.130 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:16.130 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:16.130 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:16.130 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:16.130 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@108 -- # echo mlx_0_1 00:14:16.130 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@109 -- # continue 2 00:14:16.130 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:14:16.130 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:14:16.130 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:14:16.130 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:14:16.130 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # awk '{print $4}' 00:14:16.130 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # cut -d/ -f1 00:14:16.130 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:14:16.130 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:14:16.130 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:14:16.130 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:14:16.130 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # awk '{print $4}' 00:14:16.130 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # cut -d/ -f1 00:14:16.130 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:14:16.130 192.168.100.9' 00:14:16.130 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:14:16.130 192.168.100.9' 00:14:16.130 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@485 -- # head -n 1 00:14:16.130 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:14:16.130 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:14:16.130 192.168.100.9' 00:14:16.130 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@486 -- # tail -n +2 00:14:16.130 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@486 -- # head -n 1 00:14:16.130 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:14:16.130 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:14:16.131 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:14:16.131 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:14:16.131 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:14:16.131 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:14:16.131 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:14:16.131 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:16.131 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:16.131 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:16.131 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=1615301 00:14:16.131 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 1615301 00:14:16.131 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:16.131 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 1615301 ']' 00:14:16.131 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:16.131 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:16.131 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:16.131 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:16.131 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:16.131 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:16.131 [2024-12-05 13:45:15.434787] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 
00:14:16.131 [2024-12-05 13:45:15.434834] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:16.131 [2024-12-05 13:45:15.511476] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:16.131 [2024-12-05 13:45:15.532498] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:16.131 [2024-12-05 13:45:15.532534] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:16.131 [2024-12-05 13:45:15.532540] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:16.131 [2024-12-05 13:45:15.532545] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:16.131 [2024-12-05 13:45:15.532552] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:16.131 [2024-12-05 13:45:15.533036] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:16.131 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:16.131 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:14:16.131 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:16.131 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:16.131 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:16.131 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:16.131 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:14:16.131 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.131 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:16.131 [2024-12-05 13:45:15.693339] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1858000/0x185c4f0) succeed. 00:14:16.131 [2024-12-05 13:45:15.701338] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x18594b0/0x189db90) succeed. 
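Note the bring-up order on the queue_depth target side traced above: nvmf_tgt comes up pinned to a single reactor (core mask 0x2), the RDMA transport is created over RPC, and only then are the subsystem and listener added. A minimal sketch of that sequence, assuming the in-tree binaries and default RPC socket (-u sets the in-capsule data size per rpc.py):

build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
# waitforlisten in the trace polls until the RPC socket answers; sleep is a crude stand-in
sleep 2
scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192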
00:14:16.131 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.131 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:16.131 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.131 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:16.131 Malloc0 00:14:16.131 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.131 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:16.131 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.131 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:16.131 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.131 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:16.131 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.131 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:16.131 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.131 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:14:16.131 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.131 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:16.131 [2024-12-05 13:45:15.789677] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:14:16.131 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.131 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1615557 00:14:16.131 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:16.131 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:14:16.131 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1615557 /var/tmp/bdevperf.sock 00:14:16.131 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 1615557 ']' 00:14:16.131 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:16.131 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:16.131 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:16.131 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:16.131 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:16.131 13:45:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:16.131 [2024-12-05 13:45:15.837037] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 00:14:16.131 [2024-12-05 13:45:15.837080] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1615557 ] 00:14:16.131 [2024-12-05 13:45:15.907218] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:16.131 [2024-12-05 13:45:15.928981] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:16.388 13:45:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:16.388 13:45:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:14:16.388 13:45:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:14:16.388 13:45:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.388 13:45:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:16.388 NVMe0n1 00:14:16.388 13:45:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.388 13:45:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:16.388 Running I/O for 10 seconds... 
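[Editor's note] Between the transport creation and the "Running I/O for 10 seconds" line, the script stands up the whole export path (a 64 MiB Malloc0 namespace behind cnode1, an RDMA listener on 192.168.100.8:4420) and points bdevperf at it with queue depth 1024. Collected into one runnable sketch, every command lifted from the trace above:

```bash
#!/usr/bin/env bash
# The export + measurement sequence traced above, in one place.
SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
RPC="$SPDK/scripts/rpc.py"
NQN=nqn.2016-06.io.spdk:cnode1

"$RPC" bdev_malloc_create 64 512 -b Malloc0                  # 64 MiB, 512 B blocks
"$RPC" nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001
"$RPC" nvmf_subsystem_add_ns "$NQN" Malloc0
"$RPC" nvmf_subsystem_add_listener "$NQN" -t rdma -a 192.168.100.8 -s 4420

# bdevperf in wait-for-RPC mode (-z): 4 KiB verify I/O, queue depth 1024, 10 s run.
"$SPDK/build/examples/bdevperf" -z -r /var/tmp/bdevperf.sock \
    -q 1024 -o 4096 -w verify -t 10 &
sleep 2   # give bdevperf time to open its socket (the harness uses waitforlisten)

# Attach the remote namespace (it appears as bdev NVMe0n1), then start the test.
"$RPC" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
    -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n "$NQN"
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bdevperf.sock perform_tests
```

The `-q 1024` is the point of the test: the initiator keeps 1024 four-KiB verify I/Os outstanding against the target for the full ten seconds, which is what the IOPS samples below are measuring.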
00:14:18.692 18364.00 IOPS, 71.73 MiB/s [2024-12-05T12:45:19.479Z] 18432.00 IOPS, 72.00 MiB/s [2024-12-05T12:45:20.414Z] 18575.33 IOPS, 72.56 MiB/s [2024-12-05T12:45:21.348Z] 18631.50 IOPS, 72.78 MiB/s [2024-12-05T12:45:22.313Z] 18636.80 IOPS, 72.80 MiB/s [2024-12-05T12:45:23.246Z] 18650.50 IOPS, 72.85 MiB/s [2024-12-05T12:45:24.620Z] 18717.71 IOPS, 73.12 MiB/s [2024-12-05T12:45:25.561Z] 18688.00 IOPS, 73.00 MiB/s [2024-12-05T12:45:26.495Z] 18742.00 IOPS, 73.21 MiB/s [2024-12-05T12:45:26.495Z] 18739.20 IOPS, 73.20 MiB/s 00:14:26.642 Latency(us) 00:14:26.642 [2024-12-05T12:45:26.495Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:26.642 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:14:26.642 Verification LBA range: start 0x0 length 0x4000 00:14:26.642 NVMe0n1 : 10.04 18768.66 73.32 0.00 0.00 54431.24 20583.16 35535.08 00:14:26.642 [2024-12-05T12:45:26.496Z] =================================================================================================================== 00:14:26.643 [2024-12-05T12:45:26.496Z] Total : 18768.66 73.32 0.00 0.00 54431.24 20583.16 35535.08 00:14:26.643 { 00:14:26.643 "results": [ 00:14:26.643 { 00:14:26.643 "job": "NVMe0n1", 00:14:26.643 "core_mask": "0x1", 00:14:26.643 "workload": "verify", 00:14:26.643 "status": "finished", 00:14:26.643 "verify_range": { 00:14:26.643 "start": 0, 00:14:26.643 "length": 16384 00:14:26.643 }, 00:14:26.643 "queue_depth": 1024, 00:14:26.643 "io_size": 4096, 00:14:26.643 "runtime": 10.038865, 00:14:26.643 "iops": 18768.65561993313, 00:14:26.643 "mibps": 73.3150610153638, 00:14:26.643 "io_failed": 0, 00:14:26.643 "io_timeout": 0, 00:14:26.643 "avg_latency_us": 54431.236843800325, 00:14:26.643 "min_latency_us": 20583.158518518518, 00:14:26.643 "max_latency_us": 35535.07555555556 00:14:26.643 } 00:14:26.643 ], 00:14:26.643 "core_count": 1 00:14:26.643 } 00:14:26.643 13:45:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 1615557 00:14:26.643 13:45:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 1615557 ']' 00:14:26.643 13:45:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 1615557 00:14:26.643 13:45:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:14:26.643 13:45:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:26.643 13:45:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1615557 00:14:26.643 13:45:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:26.643 13:45:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:26.643 13:45:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1615557' 00:14:26.643 killing process with pid 1615557 00:14:26.643 13:45:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 1615557 00:14:26.643 Received shutdown signal, test time was about 10.000000 seconds 00:14:26.643 00:14:26.643 Latency(us) 00:14:26.643 [2024-12-05T12:45:26.496Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:26.643 [2024-12-05T12:45:26.496Z] 
=================================================================================================================== 00:14:26.643 [2024-12-05T12:45:26.496Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:26.643 13:45:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 1615557 00:14:26.643 13:45:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:14:26.643 13:45:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:14:26.643 13:45:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:26.643 13:45:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:14:26.643 13:45:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:14:26.643 13:45:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:14:26.643 13:45:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:14:26.643 13:45:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:26.643 13:45:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:14:26.902 rmmod nvme_rdma 00:14:26.902 rmmod nvme_fabrics 00:14:26.902 13:45:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:26.902 13:45:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:14:26.902 13:45:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:14:26.902 13:45:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 1615301 ']' 00:14:26.902 13:45:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 1615301 00:14:26.902 13:45:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 1615301 ']' 00:14:26.902 13:45:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 1615301 00:14:26.902 13:45:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:14:26.902 13:45:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:26.902 13:45:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1615301 00:14:26.902 13:45:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:26.902 13:45:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:26.902 13:45:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1615301' 00:14:26.902 killing process with pid 1615301 00:14:26.902 13:45:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 1615301 00:14:26.902 13:45:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 1615301 00:14:27.161 13:45:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:27.161 13:45:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:14:27.161 00:14:27.161 real 0m17.724s 00:14:27.161 user 0m23.838s 00:14:27.161 sys 0m5.268s 00:14:27.161 
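[Editor's note] The run above sustained roughly 18.7k IOPS at queue depth 1024, and bdevperf emits the same summary as the JSON block printed in the log. If the stdout of such a run is captured to a file, the headline numbers fall out with jq; `bdevperf.json` here is a hypothetical capture, and jq is assumed to be installed:

```bash
# Pull the summary out of a captured bdevperf run (bdevperf.json is assumed
# to hold the JSON block printed above).
jq -r '.results[]
       | "\(.job): \(.iops|floor) IOPS, avg \(.avg_latency_us|floor) us, qd \(.queue_depth)"' \
   bdevperf.json
# -> NVMe0n1: 18768 IOPS, avg 54431 us, qd 1024
```

The ~54 ms average latency is just Little's law at this depth: 1024 outstanding requests divided by 18768.66 IOPS is about 54.6 ms.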
13:45:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:27.161 13:45:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:27.161 ************************************ 00:14:27.161 END TEST nvmf_queue_depth 00:14:27.161 ************************************ 00:14:27.161 13:45:26 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=rdma 00:14:27.161 13:45:26 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:27.161 13:45:26 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:27.161 13:45:26 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:14:27.161 ************************************ 00:14:27.161 START TEST nvmf_target_multipath 00:14:27.161 ************************************ 00:14:27.161 13:45:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=rdma 00:14:27.161 * Looking for test storage... 00:14:27.161 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:14:27.161 13:45:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:27.161 13:45:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:14:27.161 13:45:26 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:27.161 13:45:27 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:27.161 13:45:27 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:27.161 13:45:27 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:27.161 13:45:27 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:27.161 13:45:27 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:14:27.161 13:45:27 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:14:27.161 13:45:27 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:14:27.161 13:45:27 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:14:27.161 13:45:27 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:14:27.161 13:45:27 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:14:27.161 13:45:27 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:14:27.161 13:45:27 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:27.161 13:45:27 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:14:27.161 13:45:27 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:14:27.161 13:45:27 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:27.161 13:45:27 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- 
# (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:27.161 13:45:27 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:14:27.161 13:45:27 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:14:27.161 13:45:27 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:27.161 13:45:27 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:14:27.421 13:45:27 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:14:27.421 13:45:27 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:14:27.421 13:45:27 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:14:27.421 13:45:27 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:27.421 13:45:27 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:14:27.421 13:45:27 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:14:27.421 13:45:27 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:27.421 13:45:27 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:27.421 13:45:27 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:14:27.421 13:45:27 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:27.421 13:45:27 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:27.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:27.421 --rc genhtml_branch_coverage=1 00:14:27.421 --rc genhtml_function_coverage=1 00:14:27.421 --rc genhtml_legend=1 00:14:27.421 --rc geninfo_all_blocks=1 00:14:27.421 --rc geninfo_unexecuted_blocks=1 00:14:27.421 00:14:27.421 ' 00:14:27.421 13:45:27 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:27.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:27.421 --rc genhtml_branch_coverage=1 00:14:27.421 --rc genhtml_function_coverage=1 00:14:27.421 --rc genhtml_legend=1 00:14:27.421 --rc geninfo_all_blocks=1 00:14:27.421 --rc geninfo_unexecuted_blocks=1 00:14:27.421 00:14:27.421 ' 00:14:27.421 13:45:27 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:27.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:27.421 --rc genhtml_branch_coverage=1 00:14:27.421 --rc genhtml_function_coverage=1 00:14:27.421 --rc genhtml_legend=1 00:14:27.421 --rc geninfo_all_blocks=1 00:14:27.421 --rc geninfo_unexecuted_blocks=1 00:14:27.421 00:14:27.421 ' 00:14:27.421 13:45:27 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:27.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:27.421 --rc genhtml_branch_coverage=1 00:14:27.421 --rc genhtml_function_coverage=1 00:14:27.421 --rc genhtml_legend=1 00:14:27.421 --rc geninfo_all_blocks=1 00:14:27.421 --rc geninfo_unexecuted_blocks=1 00:14:27.421 00:14:27.421 ' 00:14:27.421 13:45:27 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # 
source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:14:27.421 13:45:27 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:14:27.421 13:45:27 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:27.421 13:45:27 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:27.421 13:45:27 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:27.421 13:45:27 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:27.421 13:45:27 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:27.421 13:45:27 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:27.421 13:45:27 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:27.421 13:45:27 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:27.421 13:45:27 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:27.421 13:45:27 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:27.421 13:45:27 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:14:27.421 13:45:27 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:14:27.421 13:45:27 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:27.421 13:45:27 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:27.421 13:45:27 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:27.421 13:45:27 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:27.421 13:45:27 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:14:27.421 13:45:27 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:14:27.421 13:45:27 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:27.421 13:45:27 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:27.421 13:45:27 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:27.421 13:45:27 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:27.421 13:45:27 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:27.421 13:45:27 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:27.421 13:45:27 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:14:27.422 13:45:27 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:27.422 13:45:27 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:14:27.422 13:45:27 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:27.422 13:45:27 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:27.422 13:45:27 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:27.422 13:45:27 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:27.422 13:45:27 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:27.422 13:45:27 nvmf_rdma.nvmf_target_core.nvmf_target_multipath 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:27.422 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:27.422 13:45:27 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:27.422 13:45:27 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:27.422 13:45:27 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:27.422 13:45:27 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:27.422 13:45:27 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:27.422 13:45:27 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:14:27.422 13:45:27 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:14:27.422 13:45:27 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:14:27.422 13:45:27 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:14:27.422 13:45:27 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:27.422 13:45:27 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:27.422 13:45:27 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:27.422 13:45:27 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:27.422 13:45:27 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:27.422 13:45:27 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:27.422 13:45:27 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:27.422 13:45:27 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:27.422 13:45:27 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:27.422 13:45:27 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:14:27.422 13:45:27 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:14:34.046 13:45:32 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:34.046 13:45:32 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:14:34.046 13:45:32 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:34.046 13:45:32 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:34.046 13:45:32 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:34.046 13:45:32 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:34.046 13:45:32 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:34.046 13:45:32 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@319 -- # net_devs=() 00:14:34.046 13:45:32 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:34.046 13:45:32 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:14:34.046 13:45:32 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:14:34.046 13:45:32 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:14:34.046 13:45:32 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:14:34.046 13:45:32 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:14:34.046 13:45:32 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:14:34.046 13:45:32 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:34.046 13:45:32 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:34.046 13:45:32 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:34.046 13:45:32 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:34.046 13:45:32 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:34.046 13:45:32 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:34.046 13:45:32 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:34.046 13:45:32 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:34.046 13:45:32 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:34.046 13:45:32 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:34.046 13:45:32 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:34.046 13:45:32 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:34.046 13:45:32 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:34.046 13:45:32 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:14:34.046 13:45:32 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:14:34.046 13:45:32 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:14:34.046 13:45:32 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:14:34.046 13:45:32 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:14:34.046 13:45:32 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:34.046 13:45:32 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:34.046 13:45:32 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:14:34.046 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:14:34.046 13:45:32 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:14:34.046 13:45:32 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:14:34.046 13:45:32 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:34.046 13:45:32 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:34.046 13:45:32 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:14:34.046 13:45:32 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:14:34.046 13:45:32 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:34.046 13:45:32 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:14:34.046 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:14:34.046 13:45:32 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:14:34.046 13:45:32 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:14:34.047 13:45:32 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:34.047 13:45:32 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:34.047 13:45:32 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:14:34.047 13:45:32 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:14:34.047 13:45:32 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:34.047 13:45:32 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:14:34.047 13:45:32 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:34.047 13:45:32 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:34.047 13:45:32 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:14:34.047 13:45:32 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:34.047 13:45:32 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:34.047 13:45:32 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:14:34.047 Found net devices under 0000:18:00.0: mlx_0_0 00:14:34.047 13:45:32 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:34.047 13:45:32 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:34.047 13:45:32 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:34.047 13:45:32 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:14:34.047 
13:45:32 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:34.047 13:45:32 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:34.047 13:45:32 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:14:34.047 Found net devices under 0000:18:00.1: mlx_0_1 00:14:34.047 13:45:32 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:34.047 13:45:32 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:34.047 13:45:32 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:14:34.047 13:45:32 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:34.047 13:45:32 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:14:34.047 13:45:32 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:14:34.047 13:45:32 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@448 -- # rdma_device_init 00:14:34.047 13:45:32 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:14:34.047 13:45:32 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@62 -- # uname 00:14:34.047 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:14:34.047 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@66 -- # modprobe ib_cm 00:14:34.047 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@67 -- # modprobe ib_core 00:14:34.047 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@68 -- # modprobe ib_umad 00:14:34.047 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:14:34.047 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@70 -- # modprobe iw_cm 00:14:34.047 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:14:34.047 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:14:34.047 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@530 -- # allocate_nic_ips 00:14:34.047 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:14:34.047 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@77 -- # get_rdma_if_list 00:14:34.047 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:34.047 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:14:34.047 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:14:34.047 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:34.047 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:14:34.047 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 
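[Editor's note] What nvmftestinit is tracing here is device discovery: scan the PCI bus for the mlx5 functions (0x15b3:0x1015 above), resolve each function to its netdev through sysfs, then read the interface's IPv4 address with the same `ip -o -4 | awk | cut` pipeline as get_ip_address. A sketch of that resolution reduced to a few lines, with the PCI address taken from the "Found 0000:18:00.0" line above:

```bash
#!/usr/bin/env bash
# Resolve an RDMA-capable PCI function to its netdev and IPv4 address,
# mirroring the pci_net_devs / get_ip_address steps traced above.
pci=0000:18:00.0                                   # mlx5 port found in the trace
for dev in /sys/bus/pci/devices/"$pci"/net/*; do
    [ -e "$dev" ] || continue                      # no netdev bound to this function
    ifname=${dev##*/}                              # e.g. mlx_0_0
    addr=$(ip -o -4 addr show "$ifname" | awk '{print $4}' | cut -d/ -f1)
    echo "$ifname -> ${addr:-<no IPv4>}"
done
```

On this box the two ports resolve to mlx_0_0 at 192.168.100.8 and mlx_0_1 at 192.168.100.9, which become NVMF_FIRST_TARGET_IP and NVMF_SECOND_TARGET_IP in the trace that follows.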
00:14:34.047 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:34.047 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:34.047 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@108 -- # echo mlx_0_0 00:14:34.047 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@109 -- # continue 2 00:14:34.047 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:14:34.047 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:34.047 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:34.047 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:34.047 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:34.047 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@108 -- # echo mlx_0_1 00:14:34.047 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@109 -- # continue 2 00:14:34.047 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:14:34.047 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:14:34.047 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:14:34.047 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:14:34.047 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # awk '{print $4}' 00:14:34.047 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # cut -d/ -f1 00:14:34.047 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:14:34.047 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:14:34.047 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:14:34.047 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:34.047 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:14:34.047 altname enp24s0f0np0 00:14:34.047 altname ens785f0np0 00:14:34.047 inet 192.168.100.8/24 scope global mlx_0_0 00:14:34.047 valid_lft forever preferred_lft forever 00:14:34.047 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:14:34.047 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:14:34.047 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:14:34.047 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:14:34.047 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # awk '{print $4}' 00:14:34.047 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # cut -d/ -f1 00:14:34.047 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath 
-- nvmf/common.sh@78 -- # ip=192.168.100.9 00:14:34.047 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:14:34.047 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:14:34.047 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:34.047 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:14:34.047 altname enp24s0f1np1 00:14:34.047 altname ens785f1np1 00:14:34.047 inet 192.168.100.9/24 scope global mlx_0_1 00:14:34.047 valid_lft forever preferred_lft forever 00:14:34.047 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:14:34.047 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:34.047 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:14:34.047 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:14:34.047 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:14:34.047 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@90 -- # get_rdma_if_list 00:14:34.047 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:34.047 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:14:34.047 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:14:34.047 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:34.047 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:14:34.047 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:14:34.047 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:34.047 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:34.047 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@108 -- # echo mlx_0_0 00:14:34.047 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@109 -- # continue 2 00:14:34.047 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:14:34.047 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:34.047 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:34.047 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:34.047 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:34.047 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@108 -- # echo mlx_0_1 00:14:34.047 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@109 -- # continue 2 00:14:34.047 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:14:34.048 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:14:34.048 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:14:34.048 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:14:34.048 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # awk '{print $4}' 00:14:34.048 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # cut -d/ -f1 00:14:34.048 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:14:34.048 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:14:34.048 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:14:34.048 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:14:34.048 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # awk '{print $4}' 00:14:34.048 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # cut -d/ -f1 00:14:34.048 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:14:34.048 192.168.100.9' 00:14:34.048 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:14:34.048 192.168.100.9' 00:14:34.048 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@485 -- # head -n 1 00:14:34.048 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:14:34.048 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:14:34.048 192.168.100.9' 00:14:34.048 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@486 -- # tail -n +2 00:14:34.048 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@486 -- # head -n 1 00:14:34.048 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:14:34.048 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:14:34.048 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:14:34.048 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:14:34.048 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:14:34.048 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:14:34.048 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 192.168.100.9 ']' 00:14:34.048 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' rdma '!=' tcp ']' 00:14:34.048 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@52 -- # echo 'run this test only with TCP transport for now' 00:14:34.048 run this test only with TCP transport for now 00:14:34.048 13:45:33 
nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@53 -- # nvmftestfini 00:14:34.048 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:34.048 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:14:34.048 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:14:34.048 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:14:34.048 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:14:34.048 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:34.048 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:14:34.048 rmmod nvme_rdma 00:14:34.048 rmmod nvme_fabrics 00:14:34.048 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:34.048 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:14:34.048 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:14:34.048 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:14:34.048 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:34.048 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:14:34.048 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@54 -- # exit 0 00:14:34.048 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:14:34.048 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:34.048 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:14:34.048 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:14:34.048 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:14:34.048 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:14:34.048 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:34.048 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:14:34.048 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:34.048 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:14:34.048 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:14:34.048 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:14:34.048 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:34.048 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:14:34.048 00:14:34.048 real 0m6.437s 00:14:34.048 user 0m1.846s 00:14:34.048 sys 0m4.727s 00:14:34.048 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 
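[Editor's note] The multipath test bails out early ("run this test only with TCP transport for now") and tears the target down, and the zcopy test that follows re-runs the same scripts/common.sh preamble, including the `lt 1.15 2` / `cmp_versions` walk used to pick lcov options. A condensed reconstruction of that comparison logic, not the verbatim helper (field handling simplified to numeric components):

```bash
#!/usr/bin/env bash
# Condensed sketch of the lt/cmp_versions logic stepped through in the traces:
# split both versions on '.', '-' and ':' and compare them field by field.
version_lt() {
    local -a a b
    IFS='.-:' read -ra a <<< "$1"
    IFS='.-:' read -ra b <<< "$2"
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1  # strictly greater: not less-than
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
    done
    return 1                                       # equal: not less-than
}
version_lt 1.15 2 && echo "lcov < 2: use the legacy --rc lcov_* options"
```

Because `lt 1.15 2` succeeds, every test in this run exports the same LCOV_OPTS block with the --rc lcov_branch_coverage / lcov_function_coverage switches, which is why that block keeps reappearing in the trace.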
00:14:34.048 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:14:34.048 ************************************ 00:14:34.048 END TEST nvmf_target_multipath 00:14:34.048 ************************************ 00:14:34.048 13:45:33 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=rdma 00:14:34.048 13:45:33 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:34.048 13:45:33 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:34.048 13:45:33 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:14:34.048 ************************************ 00:14:34.048 START TEST nvmf_zcopy 00:14:34.048 ************************************ 00:14:34.048 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=rdma 00:14:34.048 * Looking for test storage... 00:14:34.048 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:14:34.048 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:34.048 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:14:34.048 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:34.048 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:34.048 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:34.048 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:34.048 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:34.048 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:14:34.048 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:14:34.048 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:14:34.048 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:14:34.048 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:14:34.048 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:14:34.048 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:14:34.048 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:34.048 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:14:34.048 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:14:34.048 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:34.048 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:34.048 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:14:34.048 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:14:34.048 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:34.048 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:14:34.049 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:14:34.049 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:14:34.049 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:14:34.049 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:34.049 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:14:34.049 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:14:34.049 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:34.049 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:34.049 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:14:34.049 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:34.049 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:34.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:34.049 --rc genhtml_branch_coverage=1 00:14:34.049 --rc genhtml_function_coverage=1 00:14:34.049 --rc genhtml_legend=1 00:14:34.049 --rc geninfo_all_blocks=1 00:14:34.049 --rc geninfo_unexecuted_blocks=1 00:14:34.049 00:14:34.049 ' 00:14:34.049 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:34.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:34.049 --rc genhtml_branch_coverage=1 00:14:34.049 --rc genhtml_function_coverage=1 00:14:34.049 --rc genhtml_legend=1 00:14:34.049 --rc geninfo_all_blocks=1 00:14:34.049 --rc geninfo_unexecuted_blocks=1 00:14:34.049 00:14:34.049 ' 00:14:34.049 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:34.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:34.049 --rc genhtml_branch_coverage=1 00:14:34.049 --rc genhtml_function_coverage=1 00:14:34.049 --rc genhtml_legend=1 00:14:34.049 --rc geninfo_all_blocks=1 00:14:34.049 --rc geninfo_unexecuted_blocks=1 00:14:34.049 00:14:34.049 ' 00:14:34.049 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:34.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:34.049 --rc genhtml_branch_coverage=1 00:14:34.049 --rc genhtml_function_coverage=1 00:14:34.049 --rc genhtml_legend=1 00:14:34.049 --rc geninfo_all_blocks=1 00:14:34.049 --rc geninfo_unexecuted_blocks=1 00:14:34.049 00:14:34.049 ' 00:14:34.049 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:14:34.049 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:14:34.049 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:34.049 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:34.049 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:34.049 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:34.049 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:34.049 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:34.049 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:34.049 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:34.049 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:34.049 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:34.049 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:14:34.049 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:14:34.049 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:34.049 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:34.049 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:34.049 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:34.049 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:14:34.049 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:14:34.049 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:34.049 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:34.049 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:34.049 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:34.049 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:34.049 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:34.049 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:14:34.049 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:34.049 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:14:34.049 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:34.049 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:34.049 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:34.049 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:34.049 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:34.049 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:34.049 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:34.049 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:34.049 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:34.049 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:34.049 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:14:34.049 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:14:34.049 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap 
nvmftestfini SIGINT SIGTERM EXIT 00:14:34.049 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:34.049 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:34.049 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:34.049 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:34.049 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:34.049 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:34.049 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:34.049 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:34.049 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:14:34.049 13:45:33 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:40.620 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:40.620 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:14:40.620 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:40.620 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:40.620 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:40.620 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:40.620 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:40.620 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:14:40.620 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:40.620 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:14:40.620 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:14:40.620 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:14:40.620 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:14:40.620 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:14:40.620 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:14:40.620 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:40.620 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:40.620 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:40.620 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:40.620 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:40.620 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:40.620 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:40.620 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:40.620 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:40.620 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:40.620 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:40.620 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:40.620 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:40.620 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:14:40.620 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:14:40.620 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:14:40.620 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:14:40.620 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:14:40.620 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:40.620 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:40.620 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:14:40.620 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:14:40.620 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:14:40.620 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:14:40.620 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:40.620 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:40.620 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:14:40.620 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:14:40.620 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:40.620 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:14:40.620 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:14:40.620 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:14:40.620 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:14:40.620 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:40.620 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:40.620 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:14:40.620 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:14:40.620 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 
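The device scan above matches PCI IDs against a prebuilt pci_bus_cache; here 0x15b3:0x1015 is a Mellanox ConnectX-4 Lx part. A minimal standalone sketch of the same idea, using lspci instead of SPDK's cache (the helper shape and the use of lspci -Dn are assumptions, not the common.sh implementation):

# Sketch: collect PCI addresses of NICs by vendor:device ID.
mellanox=15b3
mlx=()
while read -r addr _class id _; do
    # lspci -Dn prints: <domain:bus:dev.fn> <class>: <vendor:device>
    [[ $id == "$mellanox:1015" ]] && mlx+=("$addr")
done < <(lspci -Dn)
printf 'Found %s (0x15b3 - 0x1015)\n' "${mlx[@]}"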
00:14:40.620 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:14:40.620 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:40.620 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:40.620 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:14:40.620 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:40.620 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:40.620 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:14:40.620 Found net devices under 0000:18:00.0: mlx_0_0 00:14:40.620 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:40.620 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:40.620 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:40.620 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:14:40.620 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:40.620 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:40.620 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:14:40.620 Found net devices under 0000:18:00.1: mlx_0_1 00:14:40.620 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:40.620 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:40.620 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:14:40.620 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:40.620 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:14:40.620 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:14:40.620 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@448 -- # rdma_device_init 00:14:40.620 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:14:40.620 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@62 -- # uname 00:14:40.620 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:14:40.620 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@66 -- # modprobe ib_cm 00:14:40.620 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@67 -- # modprobe ib_core 00:14:40.620 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@68 -- # modprobe ib_umad 00:14:40.620 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:14:40.620 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@70 -- # modprobe iw_cm 00:14:40.620 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:14:40.620 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@72 -- # modprobe 
rdma_ucm 00:14:40.621 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@530 -- # allocate_nic_ips 00:14:40.621 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:14:40.621 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@77 -- # get_rdma_if_list 00:14:40.621 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:40.621 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:14:40.621 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:14:40.621 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:40.621 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:14:40.621 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:14:40.621 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:40.621 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:40.621 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@108 -- # echo mlx_0_0 00:14:40.621 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@109 -- # continue 2 00:14:40.621 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:14:40.621 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:40.621 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:40.621 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:40.621 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:40.621 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@108 -- # echo mlx_0_1 00:14:40.621 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@109 -- # continue 2 00:14:40.621 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:14:40.621 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:14:40.621 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:14:40.621 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:14:40.621 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # awk '{print $4}' 00:14:40.621 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # cut -d/ -f1 00:14:40.621 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:14:40.621 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:14:40.621 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:14:40.621 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:40.621 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:14:40.621 altname enp24s0f0np0 00:14:40.621 altname ens785f0np0 00:14:40.621 inet 192.168.100.8/24 scope global mlx_0_0 
00:14:40.621 valid_lft forever preferred_lft forever 00:14:40.621 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:14:40.621 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:14:40.621 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:14:40.621 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:14:40.621 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # awk '{print $4}' 00:14:40.621 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # cut -d/ -f1 00:14:40.621 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:14:40.621 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:14:40.621 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:14:40.621 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:40.621 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:14:40.621 altname enp24s0f1np1 00:14:40.621 altname ens785f1np1 00:14:40.621 inet 192.168.100.9/24 scope global mlx_0_1 00:14:40.621 valid_lft forever preferred_lft forever 00:14:40.621 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:14:40.621 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:40.621 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:14:40.621 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:14:40.621 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:14:40.621 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@90 -- # get_rdma_if_list 00:14:40.621 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:40.621 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:14:40.621 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:14:40.621 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:40.621 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:14:40.621 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:14:40.621 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:40.621 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:40.621 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@108 -- # echo mlx_0_0 00:14:40.621 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@109 -- # continue 2 00:14:40.621 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:14:40.621 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:40.621 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:40.621 13:45:39 
nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:40.621 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:40.621 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@108 -- # echo mlx_0_1 00:14:40.621 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@109 -- # continue 2 00:14:40.621 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:14:40.621 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:14:40.621 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:14:40.621 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:14:40.621 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # awk '{print $4}' 00:14:40.621 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # cut -d/ -f1 00:14:40.621 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:14:40.621 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:14:40.621 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:14:40.621 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:14:40.621 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # awk '{print $4}' 00:14:40.621 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # cut -d/ -f1 00:14:40.621 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:14:40.621 192.168.100.9' 00:14:40.621 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:14:40.621 192.168.100.9' 00:14:40.621 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@485 -- # head -n 1 00:14:40.621 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:14:40.621 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:14:40.621 192.168.100.9' 00:14:40.621 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@486 -- # tail -n +2 00:14:40.621 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@486 -- # head -n 1 00:14:40.621 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:14:40.621 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:14:40.621 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:14:40.621 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:14:40.621 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:14:40.621 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:14:40.621 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:14:40.621 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:40.621 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:14:40.621 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:40.621 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=1624173 00:14:40.621 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 1624173 00:14:40.621 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:40.622 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 1624173 ']' 00:14:40.622 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:40.622 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:40.622 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:40.622 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:40.622 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:40.622 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:40.622 [2024-12-05 13:45:39.705702] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 00:14:40.622 [2024-12-05 13:45:39.705751] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:40.622 [2024-12-05 13:45:39.780284] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:40.622 [2024-12-05 13:45:39.800986] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:40.622 [2024-12-05 13:45:39.801020] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:40.622 [2024-12-05 13:45:39.801026] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:40.622 [2024-12-05 13:45:39.801032] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:40.622 [2024-12-05 13:45:39.801037] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
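waitforlisten above blocks until the freshly started nvmf_tgt (pid 1624173) is ready on /var/tmp/spdk.sock. A hedged sketch of that pattern, not SPDK's own helper (the function name, poll interval, and retry count are assumptions):

# Sketch: wait until a process has created its RPC UNIX socket.
waitforsocket() {
    local pid=$1 sock=${2:-/var/tmp/spdk.sock} retries=${3:-100}
    while (( retries-- > 0 )); do
        kill -0 "$pid" 2>/dev/null || return 1   # target died while waiting
        [[ -S $sock ]] && return 0               # socket exists: ready
        sleep 0.1
    done
    return 1                                     # timed out
}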
00:14:40.622 [2024-12-05 13:45:39.801529] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:40.622 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:40.622 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:14:40.622 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:40.622 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:40.622 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:40.622 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:40.622 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' rdma '!=' tcp ']' 00:14:40.622 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@16 -- # echo 'Unsupported transport: rdma' 00:14:40.622 Unsupported transport: rdma 00:14:40.622 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@17 -- # exit 0 00:14:40.622 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@1 -- # process_shm --id 0 00:14:40.622 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@812 -- # type=--id 00:14:40.622 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@813 -- # id=0 00:14:40.622 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:14:40.622 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:14:40.622 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:14:40.622 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:14:40.622 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@824 -- # for n in $shm_files 00:14:40.622 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:14:40.622 nvmf_trace.0 00:14:40.622 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@827 -- # return 0 00:14:40.622 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@1 -- # nvmftestfini 00:14:40.622 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:40.622 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:14:40.622 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:14:40.622 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:14:40.622 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:14:40.622 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:40.622 13:45:39 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:14:40.622 rmmod nvme_rdma 00:14:40.622 rmmod nvme_fabrics 00:14:40.622 13:45:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:40.622 13:45:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 
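The teardown above tolerates unload failures: errexit is suspended (set +e), driver removal is retried up to 20 times, then errexit is restored (set -e), mirroring common.sh lines 124-128. The same pattern in isolation (the sleep between attempts is an assumption):

# Sketch: retry module unload without tripping errexit.
set +e
for i in {1..20}; do
    modprobe -v -r nvme-rdma && break   # emits "rmmod nvme_rdma" etc. on success
    sleep 1
done
modprobe -v -r nvme-fabrics
set -e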
00:14:40.622 13:45:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:14:40.622 13:45:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 1624173 ']' 00:14:40.622 13:45:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 1624173 00:14:40.622 13:45:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 1624173 ']' 00:14:40.622 13:45:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 1624173 00:14:40.622 13:45:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:14:40.622 13:45:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:40.622 13:45:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1624173 00:14:40.622 13:45:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:40.622 13:45:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:40.622 13:45:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1624173' 00:14:40.622 killing process with pid 1624173 00:14:40.622 13:45:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 1624173 00:14:40.622 13:45:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 1624173 00:14:40.622 13:45:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:40.622 13:45:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:14:40.622 00:14:40.622 real 0m6.871s 00:14:40.622 user 0m2.525s 00:14:40.622 sys 0m4.872s 00:14:40.622 13:45:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:40.622 13:45:40 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:40.622 ************************************ 00:14:40.622 END TEST nvmf_zcopy 00:14:40.622 ************************************ 00:14:40.622 13:45:40 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=rdma 00:14:40.622 13:45:40 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:40.622 13:45:40 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:40.622 13:45:40 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:14:40.623 ************************************ 00:14:40.623 START TEST nvmf_nmic 00:14:40.623 ************************************ 00:14:40.623 13:45:40 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=rdma 00:14:40.623 * Looking for test storage... 
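killprocess, traced in the zcopy teardown above, is careful before signalling: it confirms the pid is alive, reads the command name, and refuses to signal a sudo wrapper. A condensed sketch of that flow, with error handling simplified:

# Sketch: kill a test daemon safely, then reap it.
killprocess() {
    local pid=$1
    kill -0 "$pid" || return 1            # still alive?
    local name
    name=$(ps --no-headers -o comm= "$pid")
    [[ $name == sudo ]] && return 1       # never signal the privileged wrapper
    echo "killing process with pid $pid"
    kill "$pid" && wait "$pid"            # wait reaps it (nvmf_tgt is a child)
}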
00:14:40.623 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:14:40.623 13:45:40 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:40.623 13:45:40 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:14:40.623 13:45:40 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:40.623 13:45:40 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:40.623 13:45:40 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:40.623 13:45:40 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:40.623 13:45:40 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:40.623 13:45:40 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:14:40.623 13:45:40 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:14:40.623 13:45:40 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:14:40.623 13:45:40 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:14:40.623 13:45:40 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:14:40.623 13:45:40 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:14:40.623 13:45:40 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:14:40.623 13:45:40 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:40.623 13:45:40 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:14:40.623 13:45:40 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:14:40.623 13:45:40 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:40.623 13:45:40 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:40.623 13:45:40 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:14:40.623 13:45:40 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:14:40.623 13:45:40 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:40.623 13:45:40 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:14:40.623 13:45:40 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:14:40.623 13:45:40 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:14:40.623 13:45:40 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:14:40.623 13:45:40 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:40.623 13:45:40 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:14:40.623 13:45:40 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:14:40.623 13:45:40 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:40.623 13:45:40 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:40.623 13:45:40 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:14:40.623 13:45:40 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:40.623 13:45:40 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:40.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:40.623 --rc genhtml_branch_coverage=1 00:14:40.623 --rc genhtml_function_coverage=1 00:14:40.623 --rc genhtml_legend=1 00:14:40.623 --rc geninfo_all_blocks=1 00:14:40.623 --rc geninfo_unexecuted_blocks=1 00:14:40.623 00:14:40.623 ' 00:14:40.623 13:45:40 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:40.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:40.623 --rc genhtml_branch_coverage=1 00:14:40.623 --rc genhtml_function_coverage=1 00:14:40.623 --rc genhtml_legend=1 00:14:40.623 --rc geninfo_all_blocks=1 00:14:40.623 --rc geninfo_unexecuted_blocks=1 00:14:40.623 00:14:40.623 ' 00:14:40.623 13:45:40 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:40.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:40.623 --rc genhtml_branch_coverage=1 00:14:40.623 --rc genhtml_function_coverage=1 00:14:40.623 --rc genhtml_legend=1 00:14:40.623 --rc geninfo_all_blocks=1 00:14:40.623 --rc geninfo_unexecuted_blocks=1 00:14:40.623 00:14:40.623 ' 00:14:40.623 13:45:40 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:40.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:40.623 --rc genhtml_branch_coverage=1 00:14:40.623 --rc genhtml_function_coverage=1 00:14:40.623 --rc genhtml_legend=1 00:14:40.623 --rc geninfo_all_blocks=1 00:14:40.623 --rc geninfo_unexecuted_blocks=1 00:14:40.623 00:14:40.623 ' 00:14:40.623 13:45:40 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:14:40.623 13:45:40 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:14:40.882 13:45:40 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:14:40.882 13:45:40 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:40.882 13:45:40 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:40.882 13:45:40 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:40.882 13:45:40 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:40.882 13:45:40 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:40.882 13:45:40 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:40.882 13:45:40 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:40.882 13:45:40 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:40.882 13:45:40 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:40.882 13:45:40 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:14:40.882 13:45:40 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:14:40.882 13:45:40 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:40.882 13:45:40 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:40.882 13:45:40 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:40.882 13:45:40 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:40.882 13:45:40 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:14:40.882 13:45:40 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:14:40.882 13:45:40 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:40.882 13:45:40 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:40.882 13:45:40 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:40.882 13:45:40 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:40.882 13:45:40 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:40.882 13:45:40 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:40.882 13:45:40 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:14:40.882 13:45:40 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:40.882 13:45:40 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:14:40.882 13:45:40 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:40.882 13:45:40 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:40.882 13:45:40 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:40.882 13:45:40 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:40.882 13:45:40 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:40.882 13:45:40 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:40.882 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:40.882 13:45:40 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:40.882 13:45:40 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:40.882 13:45:40 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:40.882 13:45:40 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:40.882 13:45:40 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:40.882 13:45:40 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 
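The "[: : integer expression expected" error captured above (common.sh line 33, in both test sections) comes from a numeric test on an empty variable: '[' '' -eq 1 ']'. A defensive form that avoids it, with a placeholder name since the real flag is not visible in this trace:

# Sketch: default an unset/empty flag to 0 before the numeric test.
if [ "${SPDK_TEST_SOME_FLAG:-0}" -eq 1 ]; then   # hypothetical flag name
    NVMF_APP+=(--some-option)                    # illustrative option only
fi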
00:14:40.882 13:45:40 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:14:40.882 13:45:40 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:40.882 13:45:40 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:40.882 13:45:40 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:40.882 13:45:40 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:40.882 13:45:40 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:40.882 13:45:40 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:40.882 13:45:40 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:40.882 13:45:40 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:40.882 13:45:40 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:40.882 13:45:40 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:14:40.882 13:45:40 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:47.456 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:47.456 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:14:47.456 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:47.456 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:47.456 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:47.456 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:47.456 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:47.456 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:14:47.456 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:47.456 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:14:47.456 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:14:47.456 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:14:47.456 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:14:47.456 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:14:47.456 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:14:47.456 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:47.456 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:47.456 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:47.456 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:47.456 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:47.456 13:45:46 
nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:47.456 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:47.456 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:47.456 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:47.456 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:47.456 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:47.456 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:47.456 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:47.456 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:14:47.456 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:14:47.456 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:14:47.456 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:14:47.456 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:14:47.456 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:47.456 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:47.456 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:14:47.456 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:14:47.456 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:14:47.456 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:14:47.456 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:47.456 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:47.456 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:14:47.456 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:14:47.456 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:47.456 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:14:47.456 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:14:47.456 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:14:47.456 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:14:47.456 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:47.456 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:47.456 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:14:47.456 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # 
NVME_CONNECT='nvme connect -i 15' 00:14:47.456 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:47.456 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:14:47.456 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:47.456 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:47.456 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:14:47.456 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:47.456 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:47.456 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:14:47.456 Found net devices under 0000:18:00.0: mlx_0_0 00:14:47.456 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:47.456 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:47.456 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:47.456 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:14:47.456 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:47.456 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:47.456 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:14:47.456 Found net devices under 0000:18:00.1: mlx_0_1 00:14:47.456 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:47.456 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:47.456 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:14:47.456 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:47.456 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:14:47.456 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:14:47.456 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@448 -- # rdma_device_init 00:14:47.456 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:14:47.456 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@62 -- # uname 00:14:47.456 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:14:47.456 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@66 -- # modprobe ib_cm 00:14:47.456 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@67 -- # modprobe ib_core 00:14:47.456 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@68 -- # modprobe ib_umad 00:14:47.456 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:14:47.456 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@70 -- # modprobe iw_cm 00:14:47.456 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@71 -- # modprobe rdma_cm 
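rdma_device_init, traced around this point, first loads the kernel IB/RDMA stack (the modprobe sequence just above, finishing with rdma_ucm on the next line), after which allocate_nic_ips reads back each interface's IPv4 address with the same ip/awk/cut pipeline as get_ip_address. Both steps in a standalone sketch:

# Sketch: load the RDMA stack, then query an interface's IPv4 address.
for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
    modprobe "$mod"
done

get_ip_address() {
    local interface=$1
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}
get_ip_address mlx_0_0   # prints 192.168.100.8 in this trace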
00:14:47.456 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:14:47.457 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@530 -- # allocate_nic_ips 00:14:47.457 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:14:47.457 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@77 -- # get_rdma_if_list 00:14:47.457 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:47.457 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:14:47.457 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:14:47.457 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:47.457 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:14:47.457 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:14:47.457 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:47.457 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:47.457 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@108 -- # echo mlx_0_0 00:14:47.457 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@109 -- # continue 2 00:14:47.457 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:14:47.457 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:47.457 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:47.457 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:47.457 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:47.457 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@108 -- # echo mlx_0_1 00:14:47.457 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@109 -- # continue 2 00:14:47.457 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:14:47.457 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:14:47.457 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:14:47.457 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:14:47.457 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # awk '{print $4}' 00:14:47.457 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # cut -d/ -f1 00:14:47.457 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:14:47.457 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:14:47.457 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:14:47.457 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:47.457 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:14:47.457 altname enp24s0f0np0 00:14:47.457 altname ens785f0np0 
00:14:47.457 inet 192.168.100.8/24 scope global mlx_0_0 00:14:47.457 valid_lft forever preferred_lft forever 00:14:47.457 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:14:47.457 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:14:47.457 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:14:47.457 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:14:47.457 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # awk '{print $4}' 00:14:47.457 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # cut -d/ -f1 00:14:47.457 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:14:47.457 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:14:47.457 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:14:47.457 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:47.457 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:14:47.457 altname enp24s0f1np1 00:14:47.457 altname ens785f1np1 00:14:47.457 inet 192.168.100.9/24 scope global mlx_0_1 00:14:47.457 valid_lft forever preferred_lft forever 00:14:47.457 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:14:47.457 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:47.457 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:14:47.457 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:14:47.457 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:14:47.457 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@90 -- # get_rdma_if_list 00:14:47.457 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:47.457 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:14:47.457 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:14:47.457 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:47.457 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:14:47.457 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:14:47.457 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:47.457 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:47.457 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@108 -- # echo mlx_0_0 00:14:47.457 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@109 -- # continue 2 00:14:47.457 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:14:47.457 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:47.457 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:47.457 
13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:47.457 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:47.457 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@108 -- # echo mlx_0_1 00:14:47.457 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@109 -- # continue 2 00:14:47.457 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:14:47.457 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:14:47.457 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:14:47.457 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:14:47.457 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # awk '{print $4}' 00:14:47.457 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # cut -d/ -f1 00:14:47.457 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:14:47.457 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:14:47.457 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:14:47.457 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:14:47.457 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # awk '{print $4}' 00:14:47.457 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # cut -d/ -f1 00:14:47.457 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:14:47.457 192.168.100.9' 00:14:47.457 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:14:47.457 192.168.100.9' 00:14:47.457 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@485 -- # head -n 1 00:14:47.457 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:14:47.457 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:14:47.457 192.168.100.9' 00:14:47.457 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@486 -- # tail -n +2 00:14:47.457 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@486 -- # head -n 1 00:14:47.457 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:14:47.457 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:14:47.457 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:14:47.457 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:14:47.457 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:14:47.457 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:14:47.457 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:14:47.457 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:47.457 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # 
xtrace_disable 00:14:47.457 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:47.457 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=1627609 00:14:47.457 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 1627609 00:14:47.457 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:47.457 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 1627609 ']' 00:14:47.457 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:47.457 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:47.457 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:47.457 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:47.457 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:47.457 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:47.457 [2024-12-05 13:45:46.678508] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 00:14:47.457 [2024-12-05 13:45:46.678557] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:47.457 [2024-12-05 13:45:46.754904] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:47.457 [2024-12-05 13:45:46.778051] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:47.458 [2024-12-05 13:45:46.778092] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:47.458 [2024-12-05 13:45:46.778098] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:47.458 [2024-12-05 13:45:46.778104] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:47.458 [2024-12-05 13:45:46.778108] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
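The nvmfappstart/waitforlisten pair traced above launches the target and blocks until its JSON-RPC socket answers. A rough sketch of that pattern, assuming the binary path and socket seen in this log (the real waitforlisten helper in autotest_common.sh adds a retry cap and more error handling than shown):

    # Start the NVMe-oF target with the flags from the trace
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # Poll until the target answers RPCs on /var/tmp/spdk.sock
    while ! scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
        kill -0 "$nvmfpid" || exit 1   # bail out if the target process died
        sleep 0.5
    done

Once the loop exits, RPCs such as the nvmf_create_transport call that follows in the log can be issued safely.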
00:14:47.458 [2024-12-05 13:45:46.779495] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:47.458 [2024-12-05 13:45:46.779606] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:47.458 [2024-12-05 13:45:46.779710] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:47.458 [2024-12-05 13:45:46.779711] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:47.458 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:47.458 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:14:47.458 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:47.458 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:47.458 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:47.458 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:47.458 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:14:47.458 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.458 13:45:46 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:47.458 [2024-12-05 13:45:46.933854] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xf24f30/0xf29420) succeed. 00:14:47.458 [2024-12-05 13:45:46.942271] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xf265c0/0xf6aac0) succeed. 00:14:47.458 13:45:47 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.458 13:45:47 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:47.458 13:45:47 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.458 13:45:47 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:47.458 Malloc0 00:14:47.458 13:45:47 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.458 13:45:47 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:47.458 13:45:47 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.458 13:45:47 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:47.458 13:45:47 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.458 13:45:47 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:47.458 13:45:47 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.458 13:45:47 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:47.458 13:45:47 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.458 13:45:47 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:14:47.458 13:45:47 
nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.458 13:45:47 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:47.458 [2024-12-05 13:45:47.108533] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:14:47.458 13:45:47 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.458 13:45:47 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:14:47.458 test case1: single bdev can't be used in multiple subsystems 00:14:47.458 13:45:47 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:14:47.458 13:45:47 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.458 13:45:47 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:47.458 13:45:47 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.458 13:45:47 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:14:47.458 13:45:47 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.458 13:45:47 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:47.458 13:45:47 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.458 13:45:47 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:14:47.458 13:45:47 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:14:47.458 13:45:47 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.458 13:45:47 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:47.458 [2024-12-05 13:45:47.132336] bdev.c:8515:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:14:47.458 [2024-12-05 13:45:47.132353] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:14:47.458 [2024-12-05 13:45:47.132359] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.458 request: 00:14:47.458 { 00:14:47.458 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:14:47.458 "namespace": { 00:14:47.458 "bdev_name": "Malloc0", 00:14:47.458 "no_auto_visible": false, 00:14:47.458 "hide_metadata": false 00:14:47.458 }, 00:14:47.458 "method": "nvmf_subsystem_add_ns", 00:14:47.458 "req_id": 1 00:14:47.458 } 00:14:47.458 Got JSON-RPC error response 00:14:47.458 response: 00:14:47.458 { 00:14:47.458 "code": -32602, 00:14:47.458 "message": "Invalid parameters" 00:14:47.458 } 00:14:47.458 13:45:47 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:14:47.458 13:45:47 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:14:47.458 13:45:47 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:14:47.458 13:45:47 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 
00:14:47.458 Adding namespace failed - expected result. 00:14:47.458 13:45:47 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:14:47.458 test case2: host connect to nvmf target in multiple paths 00:14:47.458 13:45:47 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:14:47.458 13:45:47 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.458 13:45:47 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:47.458 [2024-12-05 13:45:47.144402] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:14:47.458 13:45:47 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.458 13:45:47 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:14:48.392 13:45:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4421 00:14:49.327 13:45:49 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:14:49.327 13:45:49 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:14:49.327 13:45:49 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:49.327 13:45:49 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:14:49.327 13:45:49 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:14:51.861 13:45:51 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:51.861 13:45:51 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:51.861 13:45:51 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:51.861 13:45:51 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:51.862 13:45:51 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:51.862 13:45:51 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:14:51.862 13:45:51 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:14:51.862 [global] 00:14:51.862 thread=1 00:14:51.862 invalidate=1 00:14:51.862 rw=write 00:14:51.862 time_based=1 00:14:51.862 runtime=1 00:14:51.862 ioengine=libaio 00:14:51.862 direct=1 00:14:51.862 bs=4096 00:14:51.862 iodepth=1 00:14:51.862 norandommap=0 00:14:51.862 numjobs=1 00:14:51.862 00:14:51.862 verify_dump=1 00:14:51.862 verify_backlog=512 00:14:51.862 verify_state_save=0 00:14:51.862 do_verify=1 00:14:51.862 verify=crc32c-intel 00:14:51.862 [job0] 00:14:51.862 filename=/dev/nvme0n1 00:14:51.862 Could not set queue depth 
(nvme0n1) 00:14:51.862 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:51.862 fio-3.35 00:14:51.862 Starting 1 thread 00:14:52.802 00:14:52.802 job0: (groupid=0, jobs=1): err= 0: pid=1628674: Thu Dec 5 13:45:52 2024 00:14:52.802 read: IOPS=7672, BW=30.0MiB/s (31.4MB/s)(30.0MiB/1001msec) 00:14:52.802 slat (nsec): min=6372, max=27995, avg=7323.51, stdev=747.09 00:14:52.802 clat (usec): min=35, max=243, avg=55.76, stdev= 4.97 00:14:52.802 lat (usec): min=54, max=253, avg=63.08, stdev= 5.07 00:14:52.802 clat percentiles (usec): 00:14:52.802 | 1.00th=[ 50], 5.00th=[ 51], 10.00th=[ 52], 20.00th=[ 53], 00:14:52.802 | 30.00th=[ 55], 40.00th=[ 55], 50.00th=[ 56], 60.00th=[ 57], 00:14:52.802 | 70.00th=[ 58], 80.00th=[ 59], 90.00th=[ 60], 95.00th=[ 62], 00:14:52.802 | 99.00th=[ 65], 99.50th=[ 68], 99.90th=[ 116], 99.95th=[ 161], 00:14:52.802 | 99.99th=[ 243] 00:14:52.802 write: IOPS=7725, BW=30.2MiB/s (31.6MB/s)(30.2MiB/1001msec); 0 zone resets 00:14:52.802 slat (nsec): min=8592, max=46144, avg=9495.56, stdev=955.93 00:14:52.802 clat (usec): min=37, max=105, avg=53.34, stdev= 3.32 00:14:52.802 lat (usec): min=53, max=151, avg=62.84, stdev= 3.51 00:14:52.802 clat percentiles (usec): 00:14:52.802 | 1.00th=[ 47], 5.00th=[ 49], 10.00th=[ 50], 20.00th=[ 51], 00:14:52.802 | 30.00th=[ 52], 40.00th=[ 53], 50.00th=[ 53], 60.00th=[ 55], 00:14:52.802 | 70.00th=[ 56], 80.00th=[ 57], 90.00th=[ 58], 95.00th=[ 59], 00:14:52.802 | 99.00th=[ 62], 99.50th=[ 64], 99.90th=[ 71], 99.95th=[ 74], 00:14:52.802 | 99.99th=[ 106] 00:14:52.802 bw ( KiB/s): min=32768, max=32768, per=100.00%, avg=32768.00, stdev= 0.00, samples=1 00:14:52.802 iops : min= 8192, max= 8192, avg=8192.00, stdev= 0.00, samples=1 00:14:52.802 lat (usec) : 50=8.78%, 100=91.14%, 250=0.07% 00:14:52.802 cpu : usr=6.00%, sys=14.20%, ctx=15413, majf=0, minf=1 00:14:52.802 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:52.802 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:52.802 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:52.802 issued rwts: total=7680,7733,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:52.802 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:52.802 00:14:52.802 Run status group 0 (all jobs): 00:14:52.802 READ: bw=30.0MiB/s (31.4MB/s), 30.0MiB/s-30.0MiB/s (31.4MB/s-31.4MB/s), io=30.0MiB (31.5MB), run=1001-1001msec 00:14:52.802 WRITE: bw=30.2MiB/s (31.6MB/s), 30.2MiB/s-30.2MiB/s (31.6MB/s-31.6MB/s), io=30.2MiB (31.7MB), run=1001-1001msec 00:14:52.802 00:14:52.802 Disk stats (read/write): 00:14:52.802 nvme0n1: ios=6756/7168, merge=0/0, ticks=341/356, in_queue=697, util=90.68% 00:14:52.802 13:45:52 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:55.339 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:14:55.339 13:45:54 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:55.339 13:45:54 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:14:55.339 13:45:54 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:55.339 13:45:54 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:55.339 13:45:54 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w 
SPDKISFASTANDAWESOME 00:14:55.339 13:45:54 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:55.339 13:45:54 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:14:55.339 13:45:54 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:14:55.339 13:45:54 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:14:55.339 13:45:54 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:55.339 13:45:54 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:14:55.339 13:45:54 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:14:55.339 13:45:54 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:14:55.339 13:45:54 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:14:55.339 13:45:54 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:55.339 13:45:54 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:14:55.339 rmmod nvme_rdma 00:14:55.339 rmmod nvme_fabrics 00:14:55.339 13:45:54 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:55.339 13:45:54 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:14:55.339 13:45:54 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:14:55.339 13:45:54 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 1627609 ']' 00:14:55.339 13:45:54 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 1627609 00:14:55.339 13:45:54 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 1627609 ']' 00:14:55.339 13:45:54 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 1627609 00:14:55.339 13:45:54 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:14:55.339 13:45:54 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:55.339 13:45:54 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1627609 00:14:55.339 13:45:54 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:55.339 13:45:54 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:55.339 13:45:54 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1627609' 00:14:55.339 killing process with pid 1627609 00:14:55.339 13:45:54 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 1627609 00:14:55.339 13:45:54 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 1627609 00:14:55.339 13:45:54 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:55.339 13:45:54 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:14:55.339 00:14:55.339 real 0m14.647s 00:14:55.339 user 0m42.524s 00:14:55.339 sys 0m5.442s 00:14:55.339 13:45:54 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:55.339 13:45:54 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:55.339 ************************************ 00:14:55.339 
END TEST nvmf_nmic 00:14:55.339 ************************************ 00:14:55.339 13:45:54 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=rdma 00:14:55.339 13:45:54 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:55.339 13:45:54 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:55.339 13:45:54 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:14:55.339 ************************************ 00:14:55.339 START TEST nvmf_fio_target 00:14:55.340 ************************************ 00:14:55.340 13:45:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=rdma 00:14:55.340 * Looking for test storage... 00:14:55.340 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:14:55.340 13:45:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:55.340 13:45:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:14:55.340 13:45:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:55.340 13:45:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:55.340 13:45:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:55.340 13:45:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:55.340 13:45:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:55.340 13:45:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:14:55.340 13:45:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:14:55.340 13:45:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:14:55.340 13:45:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:14:55.340 13:45:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:14:55.340 13:45:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:14:55.340 13:45:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:14:55.340 13:45:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:55.340 13:45:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:14:55.340 13:45:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:14:55.340 13:45:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:55.340 13:45:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:55.340 13:45:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:14:55.340 13:45:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:14:55.340 13:45:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:55.340 13:45:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:14:55.340 13:45:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:14:55.340 13:45:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:14:55.340 13:45:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:14:55.340 13:45:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:55.340 13:45:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:14:55.340 13:45:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:14:55.340 13:45:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:55.340 13:45:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:55.340 13:45:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:14:55.340 13:45:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:55.340 13:45:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:55.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:55.340 --rc genhtml_branch_coverage=1 00:14:55.340 --rc genhtml_function_coverage=1 00:14:55.340 --rc genhtml_legend=1 00:14:55.340 --rc geninfo_all_blocks=1 00:14:55.340 --rc geninfo_unexecuted_blocks=1 00:14:55.340 00:14:55.340 ' 00:14:55.340 13:45:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:55.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:55.340 --rc genhtml_branch_coverage=1 00:14:55.340 --rc genhtml_function_coverage=1 00:14:55.340 --rc genhtml_legend=1 00:14:55.340 --rc geninfo_all_blocks=1 00:14:55.340 --rc geninfo_unexecuted_blocks=1 00:14:55.340 00:14:55.340 ' 00:14:55.340 13:45:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:55.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:55.340 --rc genhtml_branch_coverage=1 00:14:55.340 --rc genhtml_function_coverage=1 00:14:55.340 --rc genhtml_legend=1 00:14:55.340 --rc geninfo_all_blocks=1 00:14:55.340 --rc geninfo_unexecuted_blocks=1 00:14:55.340 00:14:55.340 ' 00:14:55.340 13:45:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:55.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:55.340 --rc genhtml_branch_coverage=1 00:14:55.340 --rc genhtml_function_coverage=1 00:14:55.340 --rc genhtml_legend=1 00:14:55.340 --rc geninfo_all_blocks=1 00:14:55.340 --rc geninfo_unexecuted_blocks=1 00:14:55.340 00:14:55.340 ' 00:14:55.340 13:45:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:14:55.600 13:45:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@7 -- # uname -s 00:14:55.600 13:45:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:55.600 13:45:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:55.600 13:45:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:55.600 13:45:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:55.600 13:45:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:55.600 13:45:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:55.600 13:45:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:55.600 13:45:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:55.600 13:45:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:55.600 13:45:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:55.600 13:45:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:14:55.600 13:45:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:14:55.600 13:45:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:55.600 13:45:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:55.600 13:45:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:55.600 13:45:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:55.600 13:45:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:14:55.600 13:45:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:14:55.600 13:45:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:55.600 13:45:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:55.600 13:45:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:55.600 13:45:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:55.600 13:45:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:55.600 13:45:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:55.600 13:45:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:14:55.600 13:45:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:55.600 13:45:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:14:55.600 13:45:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:55.600 13:45:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:55.600 13:45:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:55.600 13:45:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:55.600 13:45:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:55.600 13:45:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:55.600 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:55.600 13:45:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:55.600 13:45:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:55.600 13:45:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:55.600 13:45:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:55.600 13:45:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:55.600 
13:45:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:14:55.600 13:45:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:14:55.600 13:45:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:14:55.600 13:45:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:55.600 13:45:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:55.600 13:45:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:55.600 13:45:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:55.600 13:45:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:55.600 13:45:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:55.600 13:45:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:55.600 13:45:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:55.600 13:45:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:55.600 13:45:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:14:55.600 13:45:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.179 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:02.179 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:15:02.179 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:02.179 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:02.179 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:02.179 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:02.179 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:02.179 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:15:02.179 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:02.179 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:15:02.179 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:15:02.179 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:15:02.179 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:15:02.179 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:15:02.179 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:15:02.179 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:02.179 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 
00:15:02.179 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:02.179 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:02.179 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:02.179 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:02.179 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:02.179 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:02.179 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:02.179 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:02.179 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:02.179 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:02.179 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:02.179 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:15:02.179 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:15:02.179 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:15:02.179 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:15:02.179 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:15:02.179 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:02.179 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:02.179 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:15:02.179 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:15:02.179 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:15:02.179 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:15:02.179 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:02.179 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:02.179 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:15:02.179 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:15:02.179 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:02.179 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:15:02.179 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:15:02.179 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:15:02.179 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:15:02.179 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:02.179 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:02.179 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:15:02.179 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:15:02.179 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:02.179 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:15:02.179 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:02.179 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:02.179 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:15:02.179 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:02.179 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:02.179 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:15:02.179 Found net devices under 0000:18:00.0: mlx_0_0 00:15:02.179 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:02.179 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:02.179 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:02.179 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:15:02.179 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:02.179 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:02.179 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:15:02.179 Found net devices under 0000:18:00.1: mlx_0_1 00:15:02.179 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:02.179 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:02.179 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:15:02.179 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:02.179 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:15:02.179 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:15:02.179 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@448 -- # rdma_device_init 00:15:02.179 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:15:02.179 13:46:01 
nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@62 -- # uname 00:15:02.179 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:15:02.179 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@66 -- # modprobe ib_cm 00:15:02.179 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@67 -- # modprobe ib_core 00:15:02.179 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@68 -- # modprobe ib_umad 00:15:02.179 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:15:02.179 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@70 -- # modprobe iw_cm 00:15:02.179 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:15:02.179 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:15:02.179 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@530 -- # allocate_nic_ips 00:15:02.179 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:15:02.179 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@77 -- # get_rdma_if_list 00:15:02.179 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:02.179 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:15:02.179 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:15:02.179 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:02.179 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:15:02.179 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:15:02.180 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:02.180 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:02.180 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@108 -- # echo mlx_0_0 00:15:02.180 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@109 -- # continue 2 00:15:02.180 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:15:02.180 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:02.180 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:02.180 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:02.180 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:02.180 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@108 -- # echo mlx_0_1 00:15:02.180 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@109 -- # continue 2 00:15:02.180 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:15:02.180 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:15:02.180 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:15:02.180 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:15:02.180 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:15:02.180 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:15:02.180 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:15:02.180 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:15:02.180 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:15:02.180 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:02.180 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:15:02.180 altname enp24s0f0np0 00:15:02.180 altname ens785f0np0 00:15:02.180 inet 192.168.100.8/24 scope global mlx_0_0 00:15:02.180 valid_lft forever preferred_lft forever 00:15:02.180 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:15:02.180 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:15:02.180 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:15:02.180 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:15:02.180 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:15:02.180 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:15:02.180 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:15:02.180 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:15:02.180 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:15:02.180 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:02.180 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:15:02.180 altname enp24s0f1np1 00:15:02.180 altname ens785f1np1 00:15:02.180 inet 192.168.100.9/24 scope global mlx_0_1 00:15:02.180 valid_lft forever preferred_lft forever 00:15:02.180 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:15:02.180 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:02.180 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:15:02.180 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:15:02.180 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:15:02.180 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@90 -- # get_rdma_if_list 00:15:02.180 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:02.180 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:15:02.180 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:15:02.180 13:46:01 
nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:02.180 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:15:02.180 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:15:02.180 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:02.180 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:02.180 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@108 -- # echo mlx_0_0 00:15:02.180 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@109 -- # continue 2 00:15:02.180 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:15:02.180 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:02.180 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:02.180 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:02.180 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:02.180 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@108 -- # echo mlx_0_1 00:15:02.180 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@109 -- # continue 2 00:15:02.180 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:15:02.180 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:15:02.180 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:15:02.180 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:15:02.180 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:15:02.180 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:15:02.180 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:15:02.180 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:15:02.180 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:15:02.180 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:15:02.180 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:15:02.180 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:15:02.180 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:15:02.180 192.168.100.9' 00:15:02.180 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:15:02.180 192.168.100.9' 00:15:02.180 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@485 -- # head -n 1 00:15:02.180 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@485 -- # 
NVMF_FIRST_TARGET_IP=192.168.100.8 00:15:02.180 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:15:02.180 192.168.100.9' 00:15:02.180 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@486 -- # tail -n +2 00:15:02.180 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@486 -- # head -n 1 00:15:02.180 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:15:02.180 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:15:02.180 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:15:02.180 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:15:02.180 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:15:02.180 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:15:02.180 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:15:02.180 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:02.180 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:02.180 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.180 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=1632610 00:15:02.180 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 1632610 00:15:02.180 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:02.180 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 1632610 ']' 00:15:02.180 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:02.180 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:02.180 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:02.180 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:02.180 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:02.180 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.180 [2024-12-05 13:46:01.319868] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 00:15:02.180 [2024-12-05 13:46:01.319918] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:02.180 [2024-12-05 13:46:01.398076] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:02.180 [2024-12-05 13:46:01.420605] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
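The nvmfappstart -m 0xF step above reduces to launching the nvmf_tgt binary and polling its UNIX-domain RPC socket until it answers; that is what the harness's waitforlisten does before any configuration RPC is issued. A minimal standalone sketch of the same pattern, assuming the repo layout seen in the trace (the polling loop is an illustrative stand-in for waitforlisten, not the harness code):

  # Start the NVMe-oF target on cores 0-3 (mask 0xF), app instance 0, as logged above.
  ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # Poll the default RPC socket until the app responds; rpc_get_methods is a cheap query.
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1
  done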
00:15:02.180 [2024-12-05 13:46:01.420644] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:02.180 [2024-12-05 13:46:01.420650] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:02.180 [2024-12-05 13:46:01.420656] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:02.181 [2024-12-05 13:46:01.420660] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:02.181 [2024-12-05 13:46:01.422036] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:02.181 [2024-12-05 13:46:01.422150] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:02.181 [2024-12-05 13:46:01.422261] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:02.181 [2024-12-05 13:46:01.422262] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:02.181 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:02.181 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:15:02.181 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:02.181 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:02.181 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.181 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:02.181 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:15:02.181 [2024-12-05 13:46:01.722213] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x15f8f30/0x15fd420) succeed. 00:15:02.181 [2024-12-05 13:46:01.730297] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x15fa5c0/0x163eac0) succeed. 
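Condensed from the trace that follows, the configuration fio.sh now drives over that socket is a short RPC sequence: create the RDMA transport, build seven 64 MiB malloc bdevs with 512-byte blocks, assemble two of them into a raid0 and three more into a concat array, then expose four of the results as namespaces of a single subsystem. A sketch with the arguments copied from the log ($rpc is shorthand for the full rpc.py path; the harness interleaves these calls rather than batching them):

  rpc=./scripts/rpc.py
  $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  for _ in $(seq 1 7); do $rpc bdev_malloc_create 64 512; done   # -> Malloc0 .. Malloc6
  $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
  $rpc bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  for ns in Malloc0 Malloc1 raid0 concat0; do
      $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 $ns
  done
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

The host side then attaches with nvme connect -i 15 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420, which is why the fio job files below address four namespaces, /dev/nvme0n1 through /dev/nvme0n4.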
00:15:02.181 13:46:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:02.441 13:46:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:15:02.441 13:46:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:02.441 13:46:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:15:02.441 13:46:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:02.700 13:46:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:15:02.700 13:46:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:02.959 13:46:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:15:02.959 13:46:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:15:03.217 13:46:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:03.218 13:46:03 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:15:03.218 13:46:03 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:03.476 13:46:03 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:15:03.476 13:46:03 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:03.735 13:46:03 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:15:03.735 13:46:03 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:15:03.994 13:46:03 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:03.994 13:46:03 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:15:03.994 13:46:03 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:04.251 13:46:03 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:15:04.252 13:46:03 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:04.510 13:46:04 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:15:04.510 [2024-12-05 13:46:04.327411] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:04.510 13:46:04 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:15:04.767 13:46:04 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:15:05.026 13:46:04 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:15:05.958 13:46:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:15:05.958 13:46:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:15:05.958 13:46:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:15:05.958 13:46:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:15:05.958 13:46:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:15:05.958 13:46:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:15:07.860 13:46:07 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:15:07.860 13:46:07 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:15:07.860 13:46:07 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:15:07.860 13:46:07 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:15:07.860 13:46:07 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:15:07.860 13:46:07 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:15:07.860 13:46:07 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:15:08.119 [global] 00:15:08.119 thread=1 00:15:08.119 invalidate=1 00:15:08.119 rw=write 00:15:08.119 time_based=1 00:15:08.119 runtime=1 00:15:08.119 ioengine=libaio 00:15:08.119 direct=1 00:15:08.119 bs=4096 00:15:08.119 iodepth=1 00:15:08.119 norandommap=0 00:15:08.119 numjobs=1 00:15:08.119 00:15:08.119 verify_dump=1 00:15:08.119 verify_backlog=512 00:15:08.119 verify_state_save=0 00:15:08.119 do_verify=1 00:15:08.119 verify=crc32c-intel 00:15:08.119 [job0] 00:15:08.119 filename=/dev/nvme0n1 00:15:08.119 [job1] 00:15:08.119 filename=/dev/nvme0n2 00:15:08.119 [job2] 00:15:08.119 filename=/dev/nvme0n3 00:15:08.119 [job3] 00:15:08.119 filename=/dev/nvme0n4 00:15:08.119 Could not set queue depth (nvme0n1) 00:15:08.119 Could not set queue depth (nvme0n2) 00:15:08.119 Could not set queue depth (nvme0n3) 00:15:08.119 Could not set queue depth (nvme0n4) 00:15:08.378 job0: (g=0): rw=write, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:08.378 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:08.378 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:08.378 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:08.378 fio-3.35 00:15:08.378 Starting 4 threads 00:15:09.755 00:15:09.755 job0: (groupid=0, jobs=1): err= 0: pid=1634001: Thu Dec 5 13:46:09 2024 00:15:09.755 read: IOPS=4144, BW=16.2MiB/s (17.0MB/s)(16.2MiB/1001msec) 00:15:09.755 slat (nsec): min=6285, max=24637, avg=7253.50, stdev=767.99 00:15:09.755 clat (usec): min=63, max=379, avg=107.11, stdev=20.00 00:15:09.755 lat (usec): min=70, max=386, avg=114.36, stdev=20.00 00:15:09.755 clat percentiles (usec): 00:15:09.755 | 1.00th=[ 87], 5.00th=[ 93], 10.00th=[ 96], 20.00th=[ 99], 00:15:09.755 | 30.00th=[ 101], 40.00th=[ 103], 50.00th=[ 105], 60.00th=[ 106], 00:15:09.755 | 70.00th=[ 109], 80.00th=[ 112], 90.00th=[ 116], 95.00th=[ 120], 00:15:09.755 | 99.00th=[ 217], 99.50th=[ 255], 99.90th=[ 310], 99.95th=[ 367], 00:15:09.755 | 99.99th=[ 379] 00:15:09.755 write: IOPS=4603, BW=18.0MiB/s (18.9MB/s)(18.0MiB/1001msec); 0 zone resets 00:15:09.755 slat (nsec): min=6923, max=48428, avg=9470.01, stdev=1168.99 00:15:09.755 clat (usec): min=59, max=342, avg=100.86, stdev=18.54 00:15:09.755 lat (usec): min=68, max=352, avg=110.33, stdev=18.58 00:15:09.755 clat percentiles (usec): 00:15:09.755 | 1.00th=[ 78], 5.00th=[ 86], 10.00th=[ 89], 20.00th=[ 92], 00:15:09.755 | 30.00th=[ 95], 40.00th=[ 97], 50.00th=[ 98], 60.00th=[ 101], 00:15:09.755 | 70.00th=[ 103], 80.00th=[ 106], 90.00th=[ 112], 95.00th=[ 120], 00:15:09.755 | 99.00th=[ 192], 99.50th=[ 239], 99.90th=[ 293], 99.95th=[ 302], 00:15:09.755 | 99.99th=[ 343] 00:15:09.755 bw ( KiB/s): min=18024, max=18024, per=25.91%, avg=18024.00, stdev= 0.00, samples=1 00:15:09.755 iops : min= 4506, max= 4506, avg=4506.00, stdev= 0.00, samples=1 00:15:09.755 lat (usec) : 100=42.17%, 250=57.38%, 500=0.45% 00:15:09.755 cpu : usr=3.80%, sys=7.80%, ctx=8758, majf=0, minf=1 00:15:09.755 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:09.755 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:09.755 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:09.755 issued rwts: total=4149,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:09.755 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:09.755 job1: (groupid=0, jobs=1): err= 0: pid=1634002: Thu Dec 5 13:46:09 2024 00:15:09.755 read: IOPS=4126, BW=16.1MiB/s (16.9MB/s)(16.1MiB/1001msec) 00:15:09.755 slat (nsec): min=6284, max=19197, avg=7220.95, stdev=719.25 00:15:09.755 clat (usec): min=64, max=404, avg=107.29, stdev=20.16 00:15:09.755 lat (usec): min=72, max=411, avg=114.51, stdev=20.17 00:15:09.755 clat percentiles (usec): 00:15:09.755 | 1.00th=[ 88], 5.00th=[ 94], 10.00th=[ 96], 20.00th=[ 99], 00:15:09.755 | 30.00th=[ 101], 40.00th=[ 103], 50.00th=[ 105], 60.00th=[ 108], 00:15:09.755 | 70.00th=[ 110], 80.00th=[ 112], 90.00th=[ 116], 95.00th=[ 120], 00:15:09.755 | 99.00th=[ 221], 99.50th=[ 253], 99.90th=[ 355], 99.95th=[ 388], 00:15:09.755 | 99.99th=[ 404] 00:15:09.755 write: IOPS=4603, BW=18.0MiB/s (18.9MB/s)(18.0MiB/1001msec); 0 zone resets 00:15:09.755 slat (nsec): min=8509, max=38806, avg=9539.09, stdev=1170.17 00:15:09.755 clat (usec): min=61, 
max=369, avg=101.10, stdev=18.98 00:15:09.755 lat (usec): min=71, max=378, avg=110.64, stdev=19.06 00:15:09.755 clat percentiles (usec): 00:15:09.755 | 1.00th=[ 80], 5.00th=[ 87], 10.00th=[ 89], 20.00th=[ 93], 00:15:09.755 | 30.00th=[ 95], 40.00th=[ 97], 50.00th=[ 98], 60.00th=[ 100], 00:15:09.755 | 70.00th=[ 103], 80.00th=[ 106], 90.00th=[ 112], 95.00th=[ 120], 00:15:09.755 | 99.00th=[ 200], 99.50th=[ 241], 99.90th=[ 310], 99.95th=[ 322], 00:15:09.755 | 99.99th=[ 371] 00:15:09.755 bw ( KiB/s): min=17896, max=17896, per=25.73%, avg=17896.00, stdev= 0.00, samples=1 00:15:09.755 iops : min= 4474, max= 4474, avg=4474.00, stdev= 0.00, samples=1 00:15:09.755 lat (usec) : 100=42.25%, 250=57.31%, 500=0.45% 00:15:09.755 cpu : usr=4.30%, sys=7.10%, ctx=8739, majf=0, minf=1 00:15:09.755 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:09.755 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:09.755 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:09.755 issued rwts: total=4131,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:09.755 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:09.755 job2: (groupid=0, jobs=1): err= 0: pid=1634003: Thu Dec 5 13:46:09 2024 00:15:09.755 read: IOPS=3873, BW=15.1MiB/s (15.9MB/s)(15.1MiB/1001msec) 00:15:09.755 slat (nsec): min=6321, max=28975, avg=7520.00, stdev=827.68 00:15:09.755 clat (usec): min=73, max=278, avg=119.61, stdev=10.81 00:15:09.755 lat (usec): min=81, max=285, avg=127.13, stdev=10.80 00:15:09.755 clat percentiles (usec): 00:15:09.755 | 1.00th=[ 85], 5.00th=[ 104], 10.00th=[ 111], 20.00th=[ 115], 00:15:09.755 | 30.00th=[ 117], 40.00th=[ 119], 50.00th=[ 120], 60.00th=[ 122], 00:15:09.755 | 70.00th=[ 124], 80.00th=[ 126], 90.00th=[ 130], 95.00th=[ 135], 00:15:09.755 | 99.00th=[ 155], 99.50th=[ 163], 99.90th=[ 178], 99.95th=[ 198], 00:15:09.755 | 99.99th=[ 277] 00:15:09.755 write: IOPS=4091, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1001msec); 0 zone resets 00:15:09.755 slat (nsec): min=8711, max=36697, avg=9733.37, stdev=937.37 00:15:09.756 clat (usec): min=67, max=278, avg=110.23, stdev=11.09 00:15:09.756 lat (usec): min=77, max=294, avg=119.96, stdev=11.13 00:15:09.756 clat percentiles (usec): 00:15:09.756 | 1.00th=[ 77], 5.00th=[ 92], 10.00th=[ 101], 20.00th=[ 105], 00:15:09.756 | 30.00th=[ 108], 40.00th=[ 110], 50.00th=[ 112], 60.00th=[ 113], 00:15:09.756 | 70.00th=[ 115], 80.00th=[ 117], 90.00th=[ 120], 95.00th=[ 124], 00:15:09.756 | 99.00th=[ 147], 99.50th=[ 153], 99.90th=[ 157], 99.95th=[ 161], 00:15:09.756 | 99.99th=[ 281] 00:15:09.756 bw ( KiB/s): min=16384, max=16384, per=23.55%, avg=16384.00, stdev= 0.00, samples=1 00:15:09.756 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:15:09.756 lat (usec) : 100=6.63%, 250=93.34%, 500=0.03% 00:15:09.756 cpu : usr=3.50%, sys=7.30%, ctx=7973, majf=0, minf=1 00:15:09.756 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:09.756 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:09.756 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:09.756 issued rwts: total=3877,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:09.756 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:09.756 job3: (groupid=0, jobs=1): err= 0: pid=1634004: Thu Dec 5 13:46:09 2024 00:15:09.756 read: IOPS=3873, BW=15.1MiB/s (15.9MB/s)(15.1MiB/1001msec) 00:15:09.756 slat (nsec): min=6578, max=20150, avg=7434.27, stdev=694.03 00:15:09.756 clat 
(usec): min=75, max=294, avg=119.66, stdev=10.82 00:15:09.756 lat (usec): min=83, max=307, avg=127.09, stdev=10.82 00:15:09.756 clat percentiles (usec): 00:15:09.756 | 1.00th=[ 85], 5.00th=[ 105], 10.00th=[ 111], 20.00th=[ 115], 00:15:09.756 | 30.00th=[ 117], 40.00th=[ 119], 50.00th=[ 121], 60.00th=[ 122], 00:15:09.756 | 70.00th=[ 124], 80.00th=[ 126], 90.00th=[ 130], 95.00th=[ 133], 00:15:09.756 | 99.00th=[ 157], 99.50th=[ 161], 99.90th=[ 174], 99.95th=[ 194], 00:15:09.756 | 99.99th=[ 293] 00:15:09.756 write: IOPS=4091, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1001msec); 0 zone resets 00:15:09.756 slat (nsec): min=8713, max=38422, avg=9655.17, stdev=889.67 00:15:09.756 clat (usec): min=67, max=283, avg=110.34, stdev=10.90 00:15:09.756 lat (usec): min=77, max=292, avg=119.99, stdev=10.88 00:15:09.756 clat percentiles (usec): 00:15:09.756 | 1.00th=[ 77], 5.00th=[ 92], 10.00th=[ 101], 20.00th=[ 105], 00:15:09.756 | 30.00th=[ 108], 40.00th=[ 110], 50.00th=[ 112], 60.00th=[ 113], 00:15:09.756 | 70.00th=[ 115], 80.00th=[ 117], 90.00th=[ 120], 95.00th=[ 124], 00:15:09.756 | 99.00th=[ 145], 99.50th=[ 149], 99.90th=[ 157], 99.95th=[ 161], 00:15:09.756 | 99.99th=[ 285] 00:15:09.756 bw ( KiB/s): min=16384, max=16384, per=23.55%, avg=16384.00, stdev= 0.00, samples=1 00:15:09.756 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:15:09.756 lat (usec) : 100=6.52%, 250=93.45%, 500=0.03% 00:15:09.756 cpu : usr=4.50%, sys=6.20%, ctx=7973, majf=0, minf=1 00:15:09.756 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:09.756 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:09.756 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:09.756 issued rwts: total=3877,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:09.756 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:09.756 00:15:09.756 Run status group 0 (all jobs): 00:15:09.756 READ: bw=62.6MiB/s (65.6MB/s), 15.1MiB/s-16.2MiB/s (15.9MB/s-17.0MB/s), io=62.6MiB (65.7MB), run=1001-1001msec 00:15:09.756 WRITE: bw=67.9MiB/s (71.2MB/s), 16.0MiB/s-18.0MiB/s (16.8MB/s-18.9MB/s), io=68.0MiB (71.3MB), run=1001-1001msec 00:15:09.756 00:15:09.756 Disk stats (read/write): 00:15:09.756 nvme0n1: ios=3634/3941, merge=0/0, ticks=385/386, in_queue=771, util=87.17% 00:15:09.756 nvme0n2: ios=3584/3926, merge=0/0, ticks=379/359, in_queue=738, util=87.53% 00:15:09.756 nvme0n3: ios=3286/3584, merge=0/0, ticks=385/383, in_queue=768, util=89.25% 00:15:09.756 nvme0n4: ios=3287/3584, merge=0/0, ticks=384/371, in_queue=755, util=89.80% 00:15:09.756 13:46:09 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:15:09.756 [global] 00:15:09.756 thread=1 00:15:09.756 invalidate=1 00:15:09.756 rw=randwrite 00:15:09.756 time_based=1 00:15:09.756 runtime=1 00:15:09.756 ioengine=libaio 00:15:09.756 direct=1 00:15:09.756 bs=4096 00:15:09.756 iodepth=1 00:15:09.756 norandommap=0 00:15:09.756 numjobs=1 00:15:09.756 00:15:09.756 verify_dump=1 00:15:09.756 verify_backlog=512 00:15:09.756 verify_state_save=0 00:15:09.756 do_verify=1 00:15:09.756 verify=crc32c-intel 00:15:09.756 [job0] 00:15:09.756 filename=/dev/nvme0n1 00:15:09.756 [job1] 00:15:09.756 filename=/dev/nvme0n2 00:15:09.756 [job2] 00:15:09.756 filename=/dev/nvme0n3 00:15:09.756 [job3] 00:15:09.756 filename=/dev/nvme0n4 00:15:09.756 Could not set queue depth (nvme0n1) 00:15:09.756 Could not set queue depth 
(nvme0n2) 00:15:09.756 Could not set queue depth (nvme0n3) 00:15:09.756 Could not set queue depth (nvme0n4) 00:15:10.015 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:10.015 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:10.015 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:10.015 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:10.015 fio-3.35 00:15:10.015 Starting 4 threads 00:15:11.395 00:15:11.395 job0: (groupid=0, jobs=1): err= 0: pid=1634427: Thu Dec 5 13:46:10 2024 00:15:11.395 read: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec) 00:15:11.395 slat (nsec): min=6461, max=24169, avg=7344.07, stdev=833.59 00:15:11.395 clat (usec): min=63, max=982, avg=126.05, stdev=29.52 00:15:11.395 lat (usec): min=70, max=989, avg=133.40, stdev=29.51 00:15:11.395 clat percentiles (usec): 00:15:11.395 | 1.00th=[ 73], 5.00th=[ 82], 10.00th=[ 88], 20.00th=[ 114], 00:15:11.395 | 30.00th=[ 120], 40.00th=[ 123], 50.00th=[ 126], 60.00th=[ 129], 00:15:11.395 | 70.00th=[ 133], 80.00th=[ 139], 90.00th=[ 161], 95.00th=[ 172], 00:15:11.395 | 99.00th=[ 184], 99.50th=[ 194], 99.90th=[ 416], 99.95th=[ 474], 00:15:11.395 | 99.99th=[ 979] 00:15:11.395 write: IOPS=4037, BW=15.8MiB/s (16.5MB/s)(15.8MiB/1001msec); 0 zone resets 00:15:11.395 slat (nsec): min=8353, max=49864, avg=9149.94, stdev=1236.52 00:15:11.395 clat (usec): min=44, max=272, avg=116.21, stdev=24.51 00:15:11.395 lat (usec): min=67, max=280, avg=125.36, stdev=24.53 00:15:11.395 clat percentiles (usec): 00:15:11.395 | 1.00th=[ 65], 5.00th=[ 74], 10.00th=[ 79], 20.00th=[ 102], 00:15:11.395 | 30.00th=[ 110], 40.00th=[ 113], 50.00th=[ 117], 60.00th=[ 120], 00:15:11.395 | 70.00th=[ 124], 80.00th=[ 137], 90.00th=[ 151], 95.00th=[ 159], 00:15:11.395 | 99.00th=[ 169], 99.50th=[ 174], 99.90th=[ 186], 99.95th=[ 208], 00:15:11.395 | 99.99th=[ 273] 00:15:11.395 bw ( KiB/s): min=16384, max=16384, per=21.27%, avg=16384.00, stdev= 0.00, samples=1 00:15:11.395 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:15:11.395 lat (usec) : 50=0.01%, 100=16.38%, 250=83.50%, 500=0.09%, 1000=0.01% 00:15:11.395 cpu : usr=3.90%, sys=6.20%, ctx=7626, majf=0, minf=1 00:15:11.395 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:11.395 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:11.395 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:11.395 issued rwts: total=3584,4042,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:11.395 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:11.395 job1: (groupid=0, jobs=1): err= 0: pid=1634428: Thu Dec 5 13:46:10 2024 00:15:11.395 read: IOPS=5612, BW=21.9MiB/s (23.0MB/s)(21.9MiB/1001msec) 00:15:11.395 slat (nsec): min=6391, max=26682, avg=7159.61, stdev=824.38 00:15:11.395 clat (usec): min=58, max=570, avg=79.80, stdev=21.67 00:15:11.395 lat (usec): min=69, max=577, avg=86.96, stdev=21.70 00:15:11.395 clat percentiles (usec): 00:15:11.395 | 1.00th=[ 67], 5.00th=[ 70], 10.00th=[ 71], 20.00th=[ 73], 00:15:11.395 | 30.00th=[ 74], 40.00th=[ 75], 50.00th=[ 77], 60.00th=[ 78], 00:15:11.396 | 70.00th=[ 80], 80.00th=[ 82], 90.00th=[ 86], 95.00th=[ 93], 00:15:11.396 | 99.00th=[ 198], 99.50th=[ 223], 99.90th=[ 297], 99.95th=[ 351], 00:15:11.396 | 99.99th=[ 570] 00:15:11.396 write: 
IOPS=5626, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1001msec); 0 zone resets 00:15:11.396 slat (nsec): min=8298, max=35704, avg=9196.99, stdev=1045.03 00:15:11.396 clat (usec): min=51, max=448, avg=77.78, stdev=25.18 00:15:11.396 lat (usec): min=67, max=457, avg=86.97, stdev=25.24 00:15:11.396 clat percentiles (usec): 00:15:11.396 | 1.00th=[ 63], 5.00th=[ 65], 10.00th=[ 67], 20.00th=[ 69], 00:15:11.396 | 30.00th=[ 70], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 74], 00:15:11.396 | 70.00th=[ 76], 80.00th=[ 78], 90.00th=[ 84], 95.00th=[ 133], 00:15:11.396 | 99.00th=[ 206], 99.50th=[ 227], 99.90th=[ 297], 99.95th=[ 351], 00:15:11.396 | 99.99th=[ 449] 00:15:11.396 bw ( KiB/s): min=23152, max=23152, per=30.05%, avg=23152.00, stdev= 0.00, samples=1 00:15:11.396 iops : min= 5788, max= 5788, avg=5788.00, stdev= 0.00, samples=1 00:15:11.396 lat (usec) : 100=94.83%, 250=4.87%, 500=0.29%, 750=0.01% 00:15:11.396 cpu : usr=5.00%, sys=9.50%, ctx=11250, majf=0, minf=1 00:15:11.396 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:11.396 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:11.396 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:11.396 issued rwts: total=5618,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:11.396 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:11.396 job2: (groupid=0, jobs=1): err= 0: pid=1634429: Thu Dec 5 13:46:10 2024 00:15:11.396 read: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec) 00:15:11.396 slat (nsec): min=6243, max=32026, avg=7554.64, stdev=1187.70 00:15:11.396 clat (usec): min=69, max=552, avg=126.04, stdev=21.21 00:15:11.396 lat (usec): min=76, max=566, avg=133.59, stdev=21.36 00:15:11.396 clat percentiles (usec): 00:15:11.396 | 1.00th=[ 82], 5.00th=[ 91], 10.00th=[ 106], 20.00th=[ 117], 00:15:11.396 | 30.00th=[ 121], 40.00th=[ 124], 50.00th=[ 126], 60.00th=[ 129], 00:15:11.396 | 70.00th=[ 133], 80.00th=[ 137], 90.00th=[ 145], 95.00th=[ 159], 00:15:11.396 | 99.00th=[ 174], 99.50th=[ 186], 99.90th=[ 330], 99.95th=[ 453], 00:15:11.396 | 99.99th=[ 553] 00:15:11.396 write: IOPS=4033, BW=15.8MiB/s (16.5MB/s)(15.8MiB/1001msec); 0 zone resets 00:15:11.396 slat (nsec): min=8394, max=37736, avg=9363.94, stdev=1145.42 00:15:11.396 clat (usec): min=64, max=337, avg=116.06, stdev=17.93 00:15:11.396 lat (usec): min=73, max=346, avg=125.43, stdev=18.02 00:15:11.396 clat percentiles (usec): 00:15:11.396 | 1.00th=[ 72], 5.00th=[ 82], 10.00th=[ 96], 20.00th=[ 106], 00:15:11.396 | 30.00th=[ 111], 40.00th=[ 114], 50.00th=[ 116], 60.00th=[ 119], 00:15:11.396 | 70.00th=[ 123], 80.00th=[ 127], 90.00th=[ 139], 95.00th=[ 147], 00:15:11.396 | 99.00th=[ 161], 99.50th=[ 169], 99.90th=[ 188], 99.95th=[ 190], 00:15:11.396 | 99.99th=[ 338] 00:15:11.396 bw ( KiB/s): min=16384, max=16384, per=21.27%, avg=16384.00, stdev= 0.00, samples=1 00:15:11.396 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:15:11.396 lat (usec) : 100=10.27%, 250=89.58%, 500=0.13%, 750=0.01% 00:15:11.396 cpu : usr=3.40%, sys=6.80%, ctx=7622, majf=0, minf=1 00:15:11.396 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:11.396 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:11.396 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:11.396 issued rwts: total=3584,4038,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:11.396 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:11.396 job3: (groupid=0, jobs=1): err= 0: 
pid=1634430: Thu Dec 5 13:46:10 2024 00:15:11.396 read: IOPS=5114, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1001msec) 00:15:11.396 slat (nsec): min=6606, max=18342, avg=7380.34, stdev=727.02 00:15:11.396 clat (usec): min=67, max=288, avg=86.15, stdev= 7.58 00:15:11.396 lat (usec): min=77, max=300, avg=93.53, stdev= 7.64 00:15:11.396 clat percentiles (usec): 00:15:11.396 | 1.00th=[ 76], 5.00th=[ 78], 10.00th=[ 80], 20.00th=[ 82], 00:15:11.396 | 30.00th=[ 83], 40.00th=[ 84], 50.00th=[ 86], 60.00th=[ 87], 00:15:11.396 | 70.00th=[ 89], 80.00th=[ 91], 90.00th=[ 94], 95.00th=[ 98], 00:15:11.396 | 99.00th=[ 104], 99.50th=[ 108], 99.90th=[ 133], 99.95th=[ 219], 00:15:11.396 | 99.99th=[ 289] 00:15:11.396 write: IOPS=5562, BW=21.7MiB/s (22.8MB/s)(21.8MiB/1001msec); 0 zone resets 00:15:11.396 slat (nsec): min=8605, max=37556, avg=9399.78, stdev=1016.96 00:15:11.396 clat (usec): min=65, max=475, avg=80.62, stdev=10.66 00:15:11.396 lat (usec): min=75, max=485, avg=90.02, stdev=10.73 00:15:11.396 clat percentiles (usec): 00:15:11.396 | 1.00th=[ 70], 5.00th=[ 73], 10.00th=[ 74], 20.00th=[ 76], 00:15:11.396 | 30.00th=[ 77], 40.00th=[ 79], 50.00th=[ 80], 60.00th=[ 81], 00:15:11.396 | 70.00th=[ 83], 80.00th=[ 85], 90.00th=[ 89], 95.00th=[ 93], 00:15:11.396 | 99.00th=[ 100], 99.50th=[ 105], 99.90th=[ 192], 99.95th=[ 367], 00:15:11.396 | 99.99th=[ 478] 00:15:11.396 bw ( KiB/s): min=22032, max=22032, per=28.60%, avg=22032.00, stdev= 0.00, samples=1 00:15:11.396 iops : min= 5508, max= 5508, avg=5508.00, stdev= 0.00, samples=1 00:15:11.396 lat (usec) : 100=98.19%, 250=1.76%, 500=0.05% 00:15:11.396 cpu : usr=4.60%, sys=9.30%, ctx=10688, majf=0, minf=1 00:15:11.396 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:11.396 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:11.396 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:11.396 issued rwts: total=5120,5568,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:11.396 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:11.396 00:15:11.396 Run status group 0 (all jobs): 00:15:11.396 READ: bw=69.9MiB/s (73.3MB/s), 14.0MiB/s-21.9MiB/s (14.7MB/s-23.0MB/s), io=69.9MiB (73.3MB), run=1001-1001msec 00:15:11.396 WRITE: bw=75.2MiB/s (78.9MB/s), 15.8MiB/s-22.0MiB/s (16.5MB/s-23.0MB/s), io=75.3MiB (79.0MB), run=1001-1001msec 00:15:11.396 00:15:11.396 Disk stats (read/write): 00:15:11.396 nvme0n1: ios=3122/3271, merge=0/0, ticks=382/377, in_queue=759, util=84.47% 00:15:11.396 nvme0n2: ios=4608/4695, merge=0/0, ticks=343/351, in_queue=694, util=85.20% 00:15:11.396 nvme0n3: ios=3072/3267, merge=0/0, ticks=366/357, in_queue=723, util=88.36% 00:15:11.396 nvme0n4: ios=4273/4608, merge=0/0, ticks=360/356, in_queue=716, util=89.50% 00:15:11.396 13:46:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:15:11.396 [global] 00:15:11.396 thread=1 00:15:11.396 invalidate=1 00:15:11.396 rw=write 00:15:11.396 time_based=1 00:15:11.396 runtime=1 00:15:11.396 ioengine=libaio 00:15:11.396 direct=1 00:15:11.396 bs=4096 00:15:11.396 iodepth=128 00:15:11.396 norandommap=0 00:15:11.396 numjobs=1 00:15:11.396 00:15:11.396 verify_dump=1 00:15:11.396 verify_backlog=512 00:15:11.396 verify_state_save=0 00:15:11.396 do_verify=1 00:15:11.396 verify=crc32c-intel 00:15:11.396 [job0] 00:15:11.396 filename=/dev/nvme0n1 00:15:11.396 [job1] 00:15:11.396 filename=/dev/nvme0n2 00:15:11.396 [job2] 
00:15:11.396 filename=/dev/nvme0n3 00:15:11.396 [job3] 00:15:11.396 filename=/dev/nvme0n4 00:15:11.396 Could not set queue depth (nvme0n1) 00:15:11.396 Could not set queue depth (nvme0n2) 00:15:11.396 Could not set queue depth (nvme0n3) 00:15:11.396 Could not set queue depth (nvme0n4) 00:15:11.396 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:11.396 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:11.396 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:11.396 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:11.396 fio-3.35 00:15:11.396 Starting 4 threads 00:15:12.777 00:15:12.777 job0: (groupid=0, jobs=1): err= 0: pid=1634858: Thu Dec 5 13:46:12 2024 00:15:12.777 read: IOPS=5615, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1003msec) 00:15:12.777 slat (nsec): min=1306, max=6926.5k, avg=88778.56, stdev=436233.26 00:15:12.777 clat (usec): min=2360, max=24389, avg=11767.17, stdev=4670.77 00:15:12.777 lat (usec): min=2364, max=24394, avg=11855.95, stdev=4692.98 00:15:12.777 clat percentiles (usec): 00:15:12.777 | 1.00th=[ 3032], 5.00th=[ 4228], 10.00th=[ 5407], 20.00th=[ 7570], 00:15:12.777 | 30.00th=[ 8979], 40.00th=[10159], 50.00th=[11469], 60.00th=[13042], 00:15:12.777 | 70.00th=[14615], 80.00th=[15795], 90.00th=[18220], 95.00th=[19268], 00:15:12.777 | 99.00th=[22676], 99.50th=[22938], 99.90th=[24249], 99.95th=[24249], 00:15:12.777 | 99.99th=[24511] 00:15:12.777 write: IOPS=5769, BW=22.5MiB/s (23.6MB/s)(22.6MiB/1003msec); 0 zone resets 00:15:12.777 slat (nsec): min=1842, max=4911.0k, avg=82132.02, stdev=384370.95 00:15:12.777 clat (usec): min=1168, max=22430, avg=10477.75, stdev=4333.14 00:15:12.777 lat (usec): min=2644, max=22437, avg=10559.88, stdev=4351.44 00:15:12.777 clat percentiles (usec): 00:15:12.777 | 1.00th=[ 2868], 5.00th=[ 3818], 10.00th=[ 4948], 20.00th=[ 5866], 00:15:12.777 | 30.00th=[ 6849], 40.00th=[ 9372], 50.00th=[10552], 60.00th=[11994], 00:15:12.777 | 70.00th=[13566], 80.00th=[14615], 90.00th=[15664], 95.00th=[17171], 00:15:12.777 | 99.00th=[19006], 99.50th=[20579], 99.90th=[21890], 99.95th=[22414], 00:15:12.777 | 99.99th=[22414] 00:15:12.777 bw ( KiB/s): min=21077, max=24152, per=21.61%, avg=22614.50, stdev=2174.35, samples=2 00:15:12.777 iops : min= 5269, max= 6038, avg=5653.50, stdev=543.77, samples=2 00:15:12.777 lat (msec) : 2=0.01%, 4=4.91%, 10=36.61%, 20=56.04%, 50=2.43% 00:15:12.777 cpu : usr=2.89%, sys=5.69%, ctx=1699, majf=0, minf=1 00:15:12.777 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:15:12.777 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:12.777 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:12.777 issued rwts: total=5632,5787,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:12.777 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:12.777 job1: (groupid=0, jobs=1): err= 0: pid=1634865: Thu Dec 5 13:46:12 2024 00:15:12.777 read: IOPS=8574, BW=33.5MiB/s (35.1MB/s)(33.6MiB/1004msec) 00:15:12.777 slat (nsec): min=1251, max=5256.8k, avg=56043.58, stdev=273099.85 00:15:12.777 clat (usec): min=1334, max=22917, avg=7499.14, stdev=2999.24 00:15:12.777 lat (usec): min=3240, max=22929, avg=7555.18, stdev=3016.84 00:15:12.777 clat percentiles (usec): 00:15:12.777 | 1.00th=[ 4293], 5.00th=[ 4948], 10.00th=[ 5342], 20.00th=[ 
5735], 00:15:12.777 | 30.00th=[ 5997], 40.00th=[ 6194], 50.00th=[ 6390], 60.00th=[ 6587], 00:15:12.777 | 70.00th=[ 7046], 80.00th=[ 8586], 90.00th=[12256], 95.00th=[14484], 00:15:12.777 | 99.00th=[18220], 99.50th=[18482], 99.90th=[19006], 99.95th=[19006], 00:15:12.777 | 99.99th=[22938] 00:15:12.777 write: IOPS=8669, BW=33.9MiB/s (35.5MB/s)(34.0MiB/1004msec); 0 zone resets 00:15:12.777 slat (nsec): min=1729, max=6376.5k, avg=55129.52, stdev=251200.11 00:15:12.777 clat (usec): min=2766, max=21275, avg=7161.24, stdev=2807.54 00:15:12.777 lat (usec): min=2774, max=21284, avg=7216.37, stdev=2827.56 00:15:12.777 clat percentiles (usec): 00:15:12.777 | 1.00th=[ 4113], 5.00th=[ 4621], 10.00th=[ 4817], 20.00th=[ 5407], 00:15:12.777 | 30.00th=[ 5735], 40.00th=[ 5932], 50.00th=[ 6194], 60.00th=[ 6390], 00:15:12.777 | 70.00th=[ 6980], 80.00th=[ 8848], 90.00th=[11207], 95.00th=[13566], 00:15:12.777 | 99.00th=[17957], 99.50th=[18220], 99.90th=[20841], 99.95th=[20841], 00:15:12.777 | 99.99th=[21365] 00:15:12.777 bw ( KiB/s): min=34040, max=35520, per=33.23%, avg=34780.00, stdev=1046.52, samples=2 00:15:12.777 iops : min= 8510, max= 8880, avg=8695.00, stdev=261.63, samples=2 00:15:12.777 lat (msec) : 2=0.01%, 4=0.77%, 10=84.51%, 20=14.53%, 50=0.18% 00:15:12.777 cpu : usr=4.59%, sys=7.68%, ctx=1705, majf=0, minf=1 00:15:12.777 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:15:12.777 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:12.778 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:12.778 issued rwts: total=8609,8704,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:12.778 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:12.778 job2: (groupid=0, jobs=1): err= 0: pid=1634873: Thu Dec 5 13:46:12 2024 00:15:12.778 read: IOPS=4312, BW=16.8MiB/s (17.7MB/s)(16.9MiB/1003msec) 00:15:12.778 slat (nsec): min=1317, max=7037.7k, avg=111409.63, stdev=476645.21 00:15:12.778 clat (usec): min=2207, max=25056, avg=14169.81, stdev=3817.77 00:15:12.778 lat (usec): min=2989, max=25080, avg=14281.22, stdev=3830.17 00:15:12.778 clat percentiles (usec): 00:15:12.778 | 1.00th=[ 5080], 5.00th=[ 7898], 10.00th=[ 9110], 20.00th=[10552], 00:15:12.778 | 30.00th=[12256], 40.00th=[13829], 50.00th=[14615], 60.00th=[15008], 00:15:12.778 | 70.00th=[15926], 80.00th=[17433], 90.00th=[19268], 95.00th=[20055], 00:15:12.778 | 99.00th=[23200], 99.50th=[23725], 99.90th=[24249], 99.95th=[24773], 00:15:12.778 | 99.99th=[25035] 00:15:12.778 write: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec); 0 zone resets 00:15:12.778 slat (nsec): min=1887, max=6920.8k, avg=107846.74, stdev=490391.91 00:15:12.778 clat (usec): min=3285, max=26519, avg=14274.19, stdev=4183.48 00:15:12.778 lat (usec): min=3311, max=26529, avg=14382.04, stdev=4206.58 00:15:12.778 clat percentiles (usec): 00:15:12.778 | 1.00th=[ 5538], 5.00th=[ 6587], 10.00th=[ 7701], 20.00th=[ 9503], 00:15:12.778 | 30.00th=[12518], 40.00th=[14353], 50.00th=[15270], 60.00th=[16319], 00:15:12.778 | 70.00th=[17171], 80.00th=[17957], 90.00th=[18744], 95.00th=[19530], 00:15:12.778 | 99.00th=[21890], 99.50th=[22414], 99.90th=[24773], 99.95th=[24773], 00:15:12.778 | 99.99th=[26608] 00:15:12.778 bw ( KiB/s): min=17848, max=19016, per=17.61%, avg=18432.00, stdev=825.90, samples=2 00:15:12.778 iops : min= 4462, max= 4754, avg=4608.00, stdev=206.48, samples=2 00:15:12.778 lat (msec) : 4=0.04%, 10=17.93%, 20=77.89%, 50=4.13% 00:15:12.778 cpu : usr=2.89%, sys=4.39%, ctx=1391, majf=0, minf=1 00:15:12.778 IO 
depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:15:12.778 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:12.778 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:12.778 issued rwts: total=4325,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:12.778 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:12.778 job3: (groupid=0, jobs=1): err= 0: pid=1634878: Thu Dec 5 13:46:12 2024 00:15:12.778 read: IOPS=7071, BW=27.6MiB/s (29.0MB/s)(27.7MiB/1002msec) 00:15:12.778 slat (nsec): min=1298, max=6115.8k, avg=64837.47, stdev=321071.21 00:15:12.778 clat (usec): min=381, max=20188, avg=8935.28, stdev=3895.49 00:15:12.778 lat (usec): min=384, max=21434, avg=9000.12, stdev=3921.69 00:15:12.778 clat percentiles (usec): 00:15:12.778 | 1.00th=[ 1713], 5.00th=[ 3490], 10.00th=[ 4883], 20.00th=[ 6259], 00:15:12.778 | 30.00th=[ 7046], 40.00th=[ 7308], 50.00th=[ 7635], 60.00th=[ 8094], 00:15:12.778 | 70.00th=[10159], 80.00th=[12649], 90.00th=[15139], 95.00th=[16581], 00:15:12.778 | 99.00th=[18744], 99.50th=[19006], 99.90th=[20055], 99.95th=[20055], 00:15:12.778 | 99.99th=[20317] 00:15:12.778 write: IOPS=7153, BW=27.9MiB/s (29.3MB/s)(28.0MiB/1002msec); 0 zone resets 00:15:12.778 slat (nsec): min=1838, max=5692.1k, avg=66080.97, stdev=312045.50 00:15:12.778 clat (usec): min=553, max=19958, avg=8872.37, stdev=3844.99 00:15:12.778 lat (usec): min=620, max=20493, avg=8938.46, stdev=3867.57 00:15:12.778 clat percentiles (usec): 00:15:12.778 | 1.00th=[ 1844], 5.00th=[ 3654], 10.00th=[ 4686], 20.00th=[ 5932], 00:15:12.778 | 30.00th=[ 6849], 40.00th=[ 7111], 50.00th=[ 7504], 60.00th=[ 8979], 00:15:12.778 | 70.00th=[10683], 80.00th=[12518], 90.00th=[14353], 95.00th=[16057], 00:15:12.778 | 99.00th=[19268], 99.50th=[19268], 99.90th=[19792], 99.95th=[19792], 00:15:12.778 | 99.99th=[20055] 00:15:12.778 bw ( KiB/s): min=32768, max=32768, per=31.31%, avg=32768.00, stdev= 0.00, samples=1 00:15:12.778 iops : min= 8192, max= 8192, avg=8192.00, stdev= 0.00, samples=1 00:15:12.778 lat (usec) : 500=0.02%, 750=0.04%, 1000=0.22% 00:15:12.778 lat (msec) : 2=0.93%, 4=5.42%, 10=61.11%, 20=32.19%, 50=0.08% 00:15:12.778 cpu : usr=3.80%, sys=7.09%, ctx=1381, majf=0, minf=2 00:15:12.778 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:15:12.778 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:12.778 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:12.778 issued rwts: total=7086,7168,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:12.778 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:12.778 00:15:12.778 Run status group 0 (all jobs): 00:15:12.778 READ: bw=99.8MiB/s (105MB/s), 16.8MiB/s-33.5MiB/s (17.7MB/s-35.1MB/s), io=100MiB (105MB), run=1002-1004msec 00:15:12.778 WRITE: bw=102MiB/s (107MB/s), 17.9MiB/s-33.9MiB/s (18.8MB/s-35.5MB/s), io=103MiB (108MB), run=1002-1004msec 00:15:12.778 00:15:12.778 Disk stats (read/write): 00:15:12.778 nvme0n1: ios=5126/5120, merge=0/0, ticks=20000/16733, in_queue=36733, util=85.57% 00:15:12.778 nvme0n2: ios=7168/7669, merge=0/0, ticks=14503/15104, in_queue=29607, util=86.42% 00:15:12.778 nvme0n3: ios=3590/4096, merge=0/0, ticks=14360/16521, in_queue=30881, util=89.13% 00:15:12.778 nvme0n4: ios=5930/6144, merge=0/0, ticks=28388/28420, in_queue=56808, util=89.27% 00:15:12.778 13:46:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p 
nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:15:12.778 [global] 00:15:12.778 thread=1 00:15:12.778 invalidate=1 00:15:12.778 rw=randwrite 00:15:12.778 time_based=1 00:15:12.778 runtime=1 00:15:12.778 ioengine=libaio 00:15:12.778 direct=1 00:15:12.778 bs=4096 00:15:12.778 iodepth=128 00:15:12.778 norandommap=0 00:15:12.778 numjobs=1 00:15:12.778 00:15:12.778 verify_dump=1 00:15:12.778 verify_backlog=512 00:15:12.778 verify_state_save=0 00:15:12.778 do_verify=1 00:15:12.778 verify=crc32c-intel 00:15:12.778 [job0] 00:15:12.778 filename=/dev/nvme0n1 00:15:12.778 [job1] 00:15:12.778 filename=/dev/nvme0n2 00:15:12.778 [job2] 00:15:12.778 filename=/dev/nvme0n3 00:15:12.778 [job3] 00:15:12.778 filename=/dev/nvme0n4 00:15:12.778 Could not set queue depth (nvme0n1) 00:15:12.778 Could not set queue depth (nvme0n2) 00:15:12.778 Could not set queue depth (nvme0n3) 00:15:12.778 Could not set queue depth (nvme0n4) 00:15:13.038 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:13.038 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:13.038 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:13.038 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:13.038 fio-3.35 00:15:13.038 Starting 4 threads 00:15:14.438 00:15:14.438 job0: (groupid=0, jobs=1): err= 0: pid=1635325: Thu Dec 5 13:46:13 2024 00:15:14.438 read: IOPS=5396, BW=21.1MiB/s (22.1MB/s)(21.1MiB/1003msec) 00:15:14.438 slat (nsec): min=1233, max=4379.4k, avg=89365.82, stdev=328029.89 00:15:14.438 clat (usec): min=1427, max=23353, avg=11360.53, stdev=5240.20 00:15:14.438 lat (usec): min=2611, max=23357, avg=11449.89, stdev=5273.60 00:15:14.438 clat percentiles (usec): 00:15:14.438 | 1.00th=[ 3949], 5.00th=[ 5080], 10.00th=[ 5866], 20.00th=[ 6325], 00:15:14.438 | 30.00th=[ 6521], 40.00th=[ 7308], 50.00th=[10290], 60.00th=[13042], 00:15:14.438 | 70.00th=[16188], 80.00th=[16909], 90.00th=[18744], 95.00th=[19792], 00:15:14.438 | 99.00th=[21103], 99.50th=[21365], 99.90th=[21890], 99.95th=[22414], 00:15:14.438 | 99.99th=[23462] 00:15:14.438 write: IOPS=5615, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1003msec); 0 zone resets 00:15:14.438 slat (nsec): min=1727, max=4831.9k, avg=87222.52, stdev=347981.03 00:15:14.438 clat (usec): min=2467, max=24494, avg=11655.62, stdev=5372.87 00:15:14.438 lat (usec): min=2472, max=24496, avg=11742.85, stdev=5406.03 00:15:14.438 clat percentiles (usec): 00:15:14.438 | 1.00th=[ 4621], 5.00th=[ 5407], 10.00th=[ 5735], 20.00th=[ 5997], 00:15:14.438 | 30.00th=[ 6456], 40.00th=[ 8094], 50.00th=[10552], 60.00th=[14353], 00:15:14.438 | 70.00th=[16450], 80.00th=[17695], 90.00th=[18744], 95.00th=[19268], 00:15:14.438 | 99.00th=[21103], 99.50th=[22414], 99.90th=[24511], 99.95th=[24511], 00:15:14.438 | 99.99th=[24511] 00:15:14.438 bw ( KiB/s): min=20439, max=24576, per=22.11%, avg=22507.50, stdev=2925.30, samples=2 00:15:14.438 iops : min= 5109, max= 6144, avg=5626.50, stdev=731.86, samples=2 00:15:14.438 lat (msec) : 2=0.01%, 4=0.65%, 10=48.01%, 20=48.87%, 50=2.45% 00:15:14.438 cpu : usr=3.69%, sys=4.99%, ctx=1980, majf=0, minf=1 00:15:14.438 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:15:14.438 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:14.438 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 
00:15:14.438 issued rwts: total=5413,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:14.438 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:14.438 job1: (groupid=0, jobs=1): err= 0: pid=1635340: Thu Dec 5 13:46:13 2024 00:15:14.438 read: IOPS=6642, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1002msec) 00:15:14.438 slat (nsec): min=1317, max=5044.7k, avg=72904.95, stdev=334183.54 00:15:14.438 clat (usec): min=2694, max=19617, avg=9781.58, stdev=3965.63 00:15:14.438 lat (usec): min=2697, max=20004, avg=9854.49, stdev=3990.35 00:15:14.438 clat percentiles (usec): 00:15:14.438 | 1.00th=[ 3949], 5.00th=[ 4883], 10.00th=[ 5211], 20.00th=[ 5997], 00:15:14.438 | 30.00th=[ 6718], 40.00th=[ 7635], 50.00th=[ 9110], 60.00th=[10290], 00:15:14.438 | 70.00th=[11994], 80.00th=[14222], 90.00th=[16188], 95.00th=[16909], 00:15:14.438 | 99.00th=[17957], 99.50th=[18220], 99.90th=[19006], 99.95th=[19530], 00:15:14.438 | 99.99th=[19530] 00:15:14.438 write: IOPS=6969, BW=27.2MiB/s (28.5MB/s)(27.3MiB/1002msec); 0 zone resets 00:15:14.438 slat (nsec): min=1823, max=4385.8k, avg=69504.08, stdev=310566.79 00:15:14.438 clat (usec): min=1639, max=20661, avg=8842.94, stdev=4081.53 00:15:14.438 lat (usec): min=1973, max=20976, avg=8912.45, stdev=4104.43 00:15:14.438 clat percentiles (usec): 00:15:14.438 | 1.00th=[ 2933], 5.00th=[ 4080], 10.00th=[ 4621], 20.00th=[ 5211], 00:15:14.438 | 30.00th=[ 5932], 40.00th=[ 6521], 50.00th=[ 7570], 60.00th=[ 8848], 00:15:14.438 | 70.00th=[10683], 80.00th=[12649], 90.00th=[15401], 95.00th=[16712], 00:15:14.438 | 99.00th=[19006], 99.50th=[20055], 99.90th=[20317], 99.95th=[20317], 00:15:14.438 | 99.99th=[20579] 00:15:14.438 bw ( KiB/s): min=24568, max=30280, per=26.94%, avg=27424.00, stdev=4038.99, samples=2 00:15:14.438 iops : min= 6142, max= 7570, avg=6856.00, stdev=1009.75, samples=2 00:15:14.438 lat (msec) : 2=0.04%, 4=2.97%, 10=59.51%, 20=37.22%, 50=0.26% 00:15:14.438 cpu : usr=4.30%, sys=5.59%, ctx=1956, majf=0, minf=1 00:15:14.438 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:15:14.438 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:14.438 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:14.438 issued rwts: total=6656,6983,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:14.438 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:14.438 job2: (groupid=0, jobs=1): err= 0: pid=1635360: Thu Dec 5 13:46:13 2024 00:15:14.438 read: IOPS=6636, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1003msec) 00:15:14.438 slat (nsec): min=1320, max=4478.4k, avg=74824.46, stdev=314679.80 00:15:14.438 clat (usec): min=2937, max=21076, avg=9772.21, stdev=4174.42 00:15:14.438 lat (usec): min=3222, max=22259, avg=9847.03, stdev=4201.04 00:15:14.438 clat percentiles (usec): 00:15:14.438 | 1.00th=[ 4555], 5.00th=[ 5342], 10.00th=[ 5932], 20.00th=[ 6325], 00:15:14.438 | 30.00th=[ 6718], 40.00th=[ 7242], 50.00th=[ 7963], 60.00th=[ 9241], 00:15:14.438 | 70.00th=[11338], 80.00th=[13829], 90.00th=[16909], 95.00th=[18482], 00:15:14.438 | 99.00th=[19530], 99.50th=[19792], 99.90th=[20317], 99.95th=[20317], 00:15:14.438 | 99.99th=[21103] 00:15:14.438 write: IOPS=6747, BW=26.4MiB/s (27.6MB/s)(26.4MiB/1003msec); 0 zone resets 00:15:14.438 slat (nsec): min=1713, max=5190.6k, avg=69780.07, stdev=303574.59 00:15:14.438 clat (usec): min=2072, max=22155, avg=9152.53, stdev=4250.04 00:15:14.438 lat (usec): min=2942, max=22973, avg=9222.31, stdev=4272.65 00:15:14.438 clat percentiles (usec): 00:15:14.438 | 1.00th=[ 3884], 5.00th=[ 4686], 
10.00th=[ 5211], 20.00th=[ 5866], 00:15:14.438 | 30.00th=[ 6259], 40.00th=[ 6980], 50.00th=[ 7373], 60.00th=[ 8160], 00:15:14.438 | 70.00th=[10290], 80.00th=[13304], 90.00th=[16581], 95.00th=[17695], 00:15:14.438 | 99.00th=[20317], 99.50th=[20579], 99.90th=[21365], 99.95th=[22152], 00:15:14.438 | 99.99th=[22152] 00:15:14.438 bw ( KiB/s): min=23256, max=29992, per=26.15%, avg=26624.00, stdev=4763.07, samples=2 00:15:14.438 iops : min= 5814, max= 7498, avg=6656.00, stdev=1190.77, samples=2 00:15:14.438 lat (msec) : 4=0.80%, 10=65.49%, 20=32.81%, 50=0.89% 00:15:14.438 cpu : usr=4.29%, sys=5.39%, ctx=1689, majf=0, minf=1 00:15:14.438 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:15:14.438 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:14.438 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:14.438 issued rwts: total=6656,6768,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:14.438 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:14.438 job3: (groupid=0, jobs=1): err= 0: pid=1635365: Thu Dec 5 13:46:13 2024 00:15:14.438 read: IOPS=5705, BW=22.3MiB/s (23.4MB/s)(22.4MiB/1003msec) 00:15:14.438 slat (nsec): min=1349, max=5208.6k, avg=84129.12, stdev=370440.34 00:15:14.438 clat (usec): min=1941, max=20472, avg=10914.49, stdev=4170.76 00:15:14.438 lat (usec): min=3885, max=20927, avg=10998.62, stdev=4191.72 00:15:14.438 clat percentiles (usec): 00:15:14.438 | 1.00th=[ 4359], 5.00th=[ 5211], 10.00th=[ 5997], 20.00th=[ 6849], 00:15:14.438 | 30.00th=[ 7832], 40.00th=[ 8979], 50.00th=[10159], 60.00th=[11863], 00:15:14.438 | 70.00th=[13435], 80.00th=[15139], 90.00th=[17171], 95.00th=[17957], 00:15:14.438 | 99.00th=[19268], 99.50th=[19792], 99.90th=[20317], 99.95th=[20317], 00:15:14.438 | 99.99th=[20579] 00:15:14.438 write: IOPS=6125, BW=23.9MiB/s (25.1MB/s)(24.0MiB/1003msec); 0 zone resets 00:15:14.438 slat (nsec): min=1826, max=4487.7k, avg=80123.01, stdev=363873.13 00:15:14.438 clat (usec): min=3474, max=18664, avg=10512.00, stdev=3808.95 00:15:14.438 lat (usec): min=3480, max=19179, avg=10592.13, stdev=3827.91 00:15:14.438 clat percentiles (usec): 00:15:14.438 | 1.00th=[ 4752], 5.00th=[ 5604], 10.00th=[ 5997], 20.00th=[ 6390], 00:15:14.438 | 30.00th=[ 7046], 40.00th=[ 8455], 50.00th=[10290], 60.00th=[11863], 00:15:14.438 | 70.00th=[13304], 80.00th=[14484], 90.00th=[16188], 95.00th=[16712], 00:15:14.438 | 99.00th=[17171], 99.50th=[17171], 99.90th=[17695], 99.95th=[17695], 00:15:14.438 | 99.99th=[18744] 00:15:14.438 bw ( KiB/s): min=20192, max=28672, per=24.00%, avg=24432.00, stdev=5996.27, samples=2 00:15:14.438 iops : min= 5048, max= 7168, avg=6108.00, stdev=1499.07, samples=2 00:15:14.438 lat (msec) : 2=0.01%, 4=0.21%, 10=47.85%, 20=51.86%, 50=0.08% 00:15:14.438 cpu : usr=3.39%, sys=5.69%, ctx=1706, majf=0, minf=1 00:15:14.438 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:15:14.438 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:14.438 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:14.438 issued rwts: total=5723,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:14.438 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:14.438 00:15:14.439 Run status group 0 (all jobs): 00:15:14.439 READ: bw=95.2MiB/s (99.8MB/s), 21.1MiB/s-25.9MiB/s (22.1MB/s-27.2MB/s), io=95.5MiB (100MB), run=1002-1003msec 00:15:14.439 WRITE: bw=99.4MiB/s (104MB/s), 21.9MiB/s-27.2MiB/s (23.0MB/s-28.5MB/s), io=99.7MiB (105MB), 
run=1002-1003msec 00:15:14.439 00:15:14.439 Disk stats (read/write): 00:15:14.439 nvme0n1: ios=4056/4096, merge=0/0, ticks=14588/14886, in_queue=29474, util=86.47% 00:15:14.439 nvme0n2: ios=5828/6144, merge=0/0, ticks=17166/15030, in_queue=32196, util=86.32% 00:15:14.439 nvme0n3: ios=5632/5794, merge=0/0, ticks=15556/15531, in_queue=31087, util=89.04% 00:15:14.439 nvme0n4: ios=5120/5566, merge=0/0, ticks=16633/16473, in_queue=33106, util=88.87% 00:15:14.439 13:46:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:15:14.439 13:46:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:15:14.439 13:46:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1635540 00:15:14.439 13:46:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:15:14.439 [global] 00:15:14.439 thread=1 00:15:14.439 invalidate=1 00:15:14.439 rw=read 00:15:14.439 time_based=1 00:15:14.439 runtime=10 00:15:14.439 ioengine=libaio 00:15:14.439 direct=1 00:15:14.439 bs=4096 00:15:14.439 iodepth=1 00:15:14.439 norandommap=1 00:15:14.439 numjobs=1 00:15:14.439 00:15:14.439 [job0] 00:15:14.439 filename=/dev/nvme0n1 00:15:14.439 [job1] 00:15:14.439 filename=/dev/nvme0n2 00:15:14.439 [job2] 00:15:14.439 filename=/dev/nvme0n3 00:15:14.439 [job3] 00:15:14.439 filename=/dev/nvme0n4 00:15:14.439 Could not set queue depth (nvme0n1) 00:15:14.439 Could not set queue depth (nvme0n2) 00:15:14.439 Could not set queue depth (nvme0n3) 00:15:14.439 Could not set queue depth (nvme0n4) 00:15:14.702 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:14.702 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:14.702 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:14.702 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:14.702 fio-3.35 00:15:14.702 Starting 4 threads 00:15:17.231 13:46:16 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:15:17.491 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=85082112, buflen=4096 00:15:17.491 fio: pid=1635828, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:15:17.491 13:46:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:15:17.491 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=82169856, buflen=4096 00:15:17.491 fio: pid=1635824, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:15:17.749 13:46:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:17.749 13:46:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:15:17.749 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=679936, buflen=4096 00:15:17.749 fio: pid=1635794, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:15:17.749 13:46:17 
nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:17.749 13:46:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:15:18.008 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=36962304, buflen=4096 00:15:18.008 fio: pid=1635807, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:15:18.008 13:46:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:18.008 13:46:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:15:18.008 00:15:18.008 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1635794: Thu Dec 5 13:46:17 2024 00:15:18.008 read: IOPS=10.7k, BW=41.8MiB/s (43.8MB/s)(129MiB/3077msec) 00:15:18.008 slat (usec): min=3, max=9899, avg= 7.96, stdev=93.12 00:15:18.008 clat (usec): min=48, max=26179, avg=84.22, stdev=145.34 00:15:18.008 lat (usec): min=56, max=26186, avg=92.18, stdev=172.69 00:15:18.008 clat percentiles (usec): 00:15:18.008 | 1.00th=[ 61], 5.00th=[ 71], 10.00th=[ 73], 20.00th=[ 75], 00:15:18.008 | 30.00th=[ 76], 40.00th=[ 78], 50.00th=[ 79], 60.00th=[ 81], 00:15:18.008 | 70.00th=[ 83], 80.00th=[ 85], 90.00th=[ 93], 95.00th=[ 135], 00:15:18.008 | 99.00th=[ 182], 99.50th=[ 210], 99.90th=[ 262], 99.95th=[ 334], 00:15:18.008 | 99.99th=[ 396] 00:15:18.008 bw ( KiB/s): min=43848, max=45432, per=36.93%, avg=44632.00, stdev=678.54, samples=5 00:15:18.008 iops : min=10962, max=11358, avg=11158.00, stdev=169.63, samples=5 00:15:18.008 lat (usec) : 50=0.02%, 100=92.38%, 250=7.48%, 500=0.12% 00:15:18.008 lat (msec) : 50=0.01% 00:15:18.008 cpu : usr=2.67%, sys=8.78%, ctx=32941, majf=0, minf=2 00:15:18.008 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:18.008 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:18.008 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:18.008 issued rwts: total=32935,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:18.008 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:18.008 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1635807: Thu Dec 5 13:46:17 2024 00:15:18.008 read: IOPS=7741, BW=30.2MiB/s (31.7MB/s)(99.2MiB/3282msec) 00:15:18.008 slat (usec): min=5, max=16885, avg= 9.47, stdev=170.64 00:15:18.008 clat (usec): min=32, max=325, avg=118.33, stdev=35.95 00:15:18.008 lat (usec): min=52, max=16953, avg=127.81, stdev=173.92 00:15:18.008 clat percentiles (usec): 00:15:18.008 | 1.00th=[ 50], 5.00th=[ 53], 10.00th=[ 57], 20.00th=[ 77], 00:15:18.008 | 30.00th=[ 102], 40.00th=[ 121], 50.00th=[ 129], 60.00th=[ 137], 00:15:18.008 | 70.00th=[ 145], 80.00th=[ 151], 90.00th=[ 157], 95.00th=[ 159], 00:15:18.008 | 99.00th=[ 176], 99.50th=[ 192], 99.90th=[ 208], 99.95th=[ 210], 00:15:18.008 | 99.99th=[ 302] 00:15:18.008 bw ( KiB/s): min=26072, max=36425, per=24.09%, avg=29117.50, stdev=4067.54, samples=6 00:15:18.008 iops : min= 6518, max= 9106, avg=7279.33, stdev=1016.79, samples=6 00:15:18.008 lat (usec) : 50=1.83%, 100=27.04%, 250=71.12%, 500=0.01% 00:15:18.008 cpu : usr=1.65%, sys=6.86%, ctx=25415, majf=0, 
minf=2 00:15:18.008 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:18.008 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:18.008 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:18.008 issued rwts: total=25409,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:18.008 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:18.008 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1635824: Thu Dec 5 13:46:17 2024 00:15:18.008 read: IOPS=6963, BW=27.2MiB/s (28.5MB/s)(78.4MiB/2881msec) 00:15:18.008 slat (usec): min=6, max=8848, avg= 8.30, stdev=87.74 00:15:18.008 clat (usec): min=59, max=370, avg=133.84, stdev=25.58 00:15:18.008 lat (usec): min=67, max=8957, avg=142.14, stdev=91.31 00:15:18.008 clat percentiles (usec): 00:15:18.008 | 1.00th=[ 76], 5.00th=[ 81], 10.00th=[ 93], 20.00th=[ 118], 00:15:18.008 | 30.00th=[ 124], 40.00th=[ 130], 50.00th=[ 137], 60.00th=[ 145], 00:15:18.008 | 70.00th=[ 149], 80.00th=[ 153], 90.00th=[ 159], 95.00th=[ 169], 00:15:18.008 | 99.00th=[ 196], 99.50th=[ 202], 99.90th=[ 210], 99.95th=[ 215], 00:15:18.008 | 99.99th=[ 334] 00:15:18.008 bw ( KiB/s): min=25304, max=30496, per=22.47%, avg=27160.00, stdev=2536.62, samples=5 00:15:18.008 iops : min= 6326, max= 7624, avg=6790.00, stdev=634.16, samples=5 00:15:18.008 lat (usec) : 100=11.29%, 250=88.68%, 500=0.02% 00:15:18.008 cpu : usr=1.35%, sys=6.46%, ctx=20064, majf=0, minf=2 00:15:18.008 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:18.008 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:18.008 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:18.008 issued rwts: total=20062,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:18.008 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:18.008 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1635828: Thu Dec 5 13:46:17 2024 00:15:18.008 read: IOPS=7679, BW=30.0MiB/s (31.5MB/s)(81.1MiB/2705msec) 00:15:18.008 slat (nsec): min=6047, max=43831, avg=7306.18, stdev=817.57 00:15:18.008 clat (usec): min=69, max=556, avg=120.67, stdev=34.25 00:15:18.008 lat (usec): min=76, max=564, avg=127.97, stdev=34.40 00:15:18.008 clat percentiles (usec): 00:15:18.008 | 1.00th=[ 77], 5.00th=[ 80], 10.00th=[ 82], 20.00th=[ 86], 00:15:18.008 | 30.00th=[ 89], 40.00th=[ 93], 50.00th=[ 133], 60.00th=[ 143], 00:15:18.008 | 70.00th=[ 149], 80.00th=[ 153], 90.00th=[ 159], 95.00th=[ 169], 00:15:18.008 | 99.00th=[ 198], 99.50th=[ 202], 99.90th=[ 210], 99.95th=[ 215], 00:15:18.008 | 99.99th=[ 330] 00:15:18.008 bw ( KiB/s): min=25304, max=42176, per=25.90%, avg=31305.60, stdev=8285.75, samples=5 00:15:18.008 iops : min= 6326, max=10544, avg=7826.40, stdev=2071.44, samples=5 00:15:18.008 lat (usec) : 100=45.48%, 250=54.49%, 500=0.02%, 750=0.01% 00:15:18.008 cpu : usr=1.70%, sys=6.92%, ctx=20774, majf=0, minf=1 00:15:18.008 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:18.008 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:18.008 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:18.008 issued rwts: total=20773,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:18.008 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:18.008 00:15:18.008 Run status group 0 (all jobs): 00:15:18.008 READ: bw=118MiB/s (124MB/s), 
27.2MiB/s-41.8MiB/s (28.5MB/s-43.8MB/s), io=387MiB (406MB), run=2705-3282msec 00:15:18.008 00:15:18.008 Disk stats (read/write): 00:15:18.008 nvme0n1: ios=30635/0, merge=0/0, ticks=2481/0, in_queue=2481, util=95.33% 00:15:18.008 nvme0n2: ios=22848/0, merge=0/0, ticks=2765/0, in_queue=2765, util=94.47% 00:15:18.008 nvme0n3: ios=19983/0, merge=0/0, ticks=2626/0, in_queue=2626, util=96.06% 00:15:18.008 nvme0n4: ios=20329/0, merge=0/0, ticks=2338/0, in_queue=2338, util=96.50% 00:15:18.267 13:46:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:18.267 13:46:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:15:18.525 13:46:18 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:18.526 13:46:18 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:15:18.526 13:46:18 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:18.526 13:46:18 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:15:18.784 13:46:18 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:18.784 13:46:18 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:15:19.043 13:46:18 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:15:19.043 13:46:18 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 1635540 00:15:19.043 13:46:18 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:15:19.043 13:46:18 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:19.978 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:19.978 13:46:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:19.978 13:46:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:15:19.978 13:46:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:15:19.978 13:46:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:19.978 13:46:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:15:19.978 13:46:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:19.978 13:46:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:15:19.978 13:46:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:15:19.978 13:46:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:15:19.978 nvmf hotplug test: fio failed as expected 
00:15:19.978 13:46:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:20.237 13:46:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:15:20.237 13:46:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:15:20.237 13:46:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:15:20.237 13:46:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:15:20.237 13:46:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:15:20.237 13:46:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:20.237 13:46:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:15:20.237 13:46:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:15:20.237 13:46:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:15:20.237 13:46:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:15:20.237 13:46:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:20.237 13:46:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:15:20.237 rmmod nvme_rdma 00:15:20.237 rmmod nvme_fabrics 00:15:20.237 13:46:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:20.237 13:46:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:15:20.237 13:46:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:15:20.237 13:46:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 1632610 ']' 00:15:20.237 13:46:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 1632610 00:15:20.237 13:46:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 1632610 ']' 00:15:20.237 13:46:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 1632610 00:15:20.237 13:46:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:15:20.237 13:46:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:20.237 13:46:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1632610 00:15:20.237 13:46:20 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:20.237 13:46:20 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:20.237 13:46:20 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1632610' 00:15:20.237 killing process with pid 1632610 00:15:20.237 13:46:20 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 1632610 00:15:20.237 13:46:20 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 1632610 00:15:20.496 13:46:20 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 
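Teardown is the setup in reverse; condensed from the nvmftestfini trace above, with the subsystem name and target PID as logged (modprobe -v prints the rmmod calls seen in the output):

rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
rm -f ./local-job0-0-verify.state ./local-job1-1-verify.state ./local-job2-2-verify.state
sync
modprobe -v -r nvme-rdma       # emits the 'rmmod nvme_rdma' / 'rmmod nvme_fabrics' lines above
modprobe -v -r nvme-fabrics
kill 1632610 && wait 1632610   # stop the nvmf_tgt started for nvmf_fio_target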
00:15:20.497 13:46:20 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:15:20.497 00:15:20.497 real 0m25.221s 00:15:20.497 user 2m1.559s 00:15:20.497 sys 0m9.071s 00:15:20.497 13:46:20 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:20.497 13:46:20 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.497 ************************************ 00:15:20.497 END TEST nvmf_fio_target 00:15:20.497 ************************************ 00:15:20.497 13:46:20 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=rdma 00:15:20.497 13:46:20 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:20.497 13:46:20 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:20.497 13:46:20 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:15:20.497 ************************************ 00:15:20.497 START TEST nvmf_bdevio 00:15:20.497 ************************************ 00:15:20.497 13:46:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=rdma 00:15:20.757 * Looking for test storage... 00:15:20.757 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:15:20.757 13:46:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:20.757 13:46:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:15:20.757 13:46:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:20.757 13:46:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:20.757 13:46:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:20.757 13:46:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:20.757 13:46:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:20.757 13:46:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:15:20.757 13:46:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:15:20.757 13:46:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:15:20.757 13:46:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:15:20.757 13:46:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:15:20.757 13:46:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:15:20.757 13:46:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:15:20.757 13:46:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:20.757 13:46:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:15:20.757 13:46:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:15:20.757 13:46:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:20.757 13:46:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < 
(ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:20.757 13:46:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:15:20.757 13:46:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:15:20.757 13:46:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:20.757 13:46:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:15:20.757 13:46:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:15:20.757 13:46:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:15:20.757 13:46:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:15:20.757 13:46:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:20.757 13:46:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:15:20.757 13:46:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:15:20.757 13:46:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:20.757 13:46:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:20.757 13:46:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:15:20.757 13:46:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:20.757 13:46:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:20.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:20.757 --rc genhtml_branch_coverage=1 00:15:20.757 --rc genhtml_function_coverage=1 00:15:20.757 --rc genhtml_legend=1 00:15:20.757 --rc geninfo_all_blocks=1 00:15:20.757 --rc geninfo_unexecuted_blocks=1 00:15:20.757 00:15:20.757 ' 00:15:20.757 13:46:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:20.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:20.757 --rc genhtml_branch_coverage=1 00:15:20.757 --rc genhtml_function_coverage=1 00:15:20.757 --rc genhtml_legend=1 00:15:20.757 --rc geninfo_all_blocks=1 00:15:20.757 --rc geninfo_unexecuted_blocks=1 00:15:20.757 00:15:20.757 ' 00:15:20.757 13:46:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:20.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:20.757 --rc genhtml_branch_coverage=1 00:15:20.757 --rc genhtml_function_coverage=1 00:15:20.757 --rc genhtml_legend=1 00:15:20.757 --rc geninfo_all_blocks=1 00:15:20.757 --rc geninfo_unexecuted_blocks=1 00:15:20.757 00:15:20.757 ' 00:15:20.757 13:46:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:20.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:20.758 --rc genhtml_branch_coverage=1 00:15:20.758 --rc genhtml_function_coverage=1 00:15:20.758 --rc genhtml_legend=1 00:15:20.758 --rc geninfo_all_blocks=1 00:15:20.758 --rc geninfo_unexecuted_blocks=1 00:15:20.758 00:15:20.758 ' 00:15:20.758 13:46:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:15:20.758 13:46:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:15:20.758 13:46:20 
nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:20.758 13:46:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:20.758 13:46:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:20.758 13:46:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:20.758 13:46:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:20.758 13:46:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:20.758 13:46:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:20.758 13:46:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:20.758 13:46:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:20.758 13:46:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:20.758 13:46:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:15:20.758 13:46:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:15:20.758 13:46:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:20.758 13:46:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:20.758 13:46:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:20.758 13:46:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:20.758 13:46:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:15:20.758 13:46:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:15:20.758 13:46:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:20.758 13:46:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:20.758 13:46:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:20.758 13:46:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:20.758 13:46:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:20.758 13:46:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:20.758 13:46:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:15:20.758 13:46:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:20.758 13:46:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:15:20.758 13:46:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:20.758 13:46:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:20.758 13:46:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:20.758 13:46:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:20.758 13:46:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:20.758 13:46:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:20.758 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:20.758 13:46:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:20.758 13:46:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:20.758 13:46:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:20.758 13:46:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:20.758 13:46:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:20.758 13:46:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- 
target/bdevio.sh@14 -- # nvmftestinit 00:15:20.758 13:46:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:15:20.758 13:46:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:20.758 13:46:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:20.758 13:46:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:20.758 13:46:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:20.758 13:46:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:20.758 13:46:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:20.758 13:46:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:20.758 13:46:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:20.758 13:46:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:20.758 13:46:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:15:20.758 13:46:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:27.323 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:27.323 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:15:27.323 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:27.323 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:27.323 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:27.323 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:27.323 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:27.323 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:15:27.323 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:27.323 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:15:27.323 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:15:27.323 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:15:27.323 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:15:27.323 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:15:27.323 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:15:27.323 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:27.323 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:27.323 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:27.323 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:27.323 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:27.323 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:27.323 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:27.323 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:27.323 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:27.323 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:27.323 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:27.323 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:27.323 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:27.323 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:15:27.323 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:15:27.323 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:15:27.323 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:15:27.323 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:15:27.323 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:27.323 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:27.323 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:15:27.323 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:15:27.323 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:15:27.323 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:15:27.323 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:27.323 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:27.323 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:15:27.323 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:15:27.323 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:27.323 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:15:27.323 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:15:27.323 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:15:27.323 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:15:27.323 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:27.323 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:27.323 13:46:26 
nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:15:27.323 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:15:27.323 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:27.323 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:15:27.323 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:27.323 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:27.323 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:15:27.323 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:27.323 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:27.323 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:15:27.323 Found net devices under 0000:18:00.0: mlx_0_0 00:15:27.323 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:27.323 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:27.324 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:27.324 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:15:27.324 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:27.324 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:27.324 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:15:27.324 Found net devices under 0000:18:00.1: mlx_0_1 00:15:27.324 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:27.324 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:27.324 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:15:27.324 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:27.324 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:15:27.324 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:15:27.324 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@448 -- # rdma_device_init 00:15:27.324 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:15:27.324 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@62 -- # uname 00:15:27.324 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:15:27.324 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@66 -- # modprobe ib_cm 00:15:27.324 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@67 -- # modprobe ib_core 00:15:27.324 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@68 -- # modprobe ib_umad 00:15:27.324 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@69 -- # modprobe ib_uverbs 00:15:27.324 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@70 -- # modprobe iw_cm 00:15:27.324 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:15:27.324 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:15:27.324 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@530 -- # allocate_nic_ips 00:15:27.324 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:15:27.324 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@77 -- # get_rdma_if_list 00:15:27.324 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:27.324 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:15:27.324 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:15:27.324 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:27.324 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:15:27.324 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:15:27.324 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:27.324 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:27.324 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@108 -- # echo mlx_0_0 00:15:27.324 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@109 -- # continue 2 00:15:27.324 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:15:27.324 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:27.324 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:27.324 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:27.324 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:27.324 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@108 -- # echo mlx_0_1 00:15:27.324 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@109 -- # continue 2 00:15:27.324 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:15:27.324 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:15:27.324 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:15:27.324 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:15:27.324 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # awk '{print $4}' 00:15:27.324 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # cut -d/ -f1 00:15:27.324 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:15:27.324 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 
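allocate_nic_ips has just resolved the first port: get_ip_address strips the CIDR suffix off the interface's IPv4 address. The pipeline, exactly as traced:

# get_ip_address mlx_0_0
ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1   # -> 192.168.100.8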
00:15:27.324 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:15:27.324 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:27.324 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:15:27.324 altname enp24s0f0np0 00:15:27.324 altname ens785f0np0 00:15:27.324 inet 192.168.100.8/24 scope global mlx_0_0 00:15:27.324 valid_lft forever preferred_lft forever 00:15:27.324 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:15:27.324 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:15:27.324 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:15:27.324 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:15:27.324 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # awk '{print $4}' 00:15:27.324 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # cut -d/ -f1 00:15:27.324 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:15:27.324 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:15:27.324 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:15:27.324 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:27.324 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:15:27.324 altname enp24s0f1np1 00:15:27.324 altname ens785f1np1 00:15:27.324 inet 192.168.100.9/24 scope global mlx_0_1 00:15:27.324 valid_lft forever preferred_lft forever 00:15:27.324 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:15:27.324 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:27.324 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:15:27.324 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:15:27.324 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:15:27.324 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@90 -- # get_rdma_if_list 00:15:27.324 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:27.324 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:15:27.324 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:15:27.324 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:27.324 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:15:27.324 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:15:27.324 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:27.324 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:27.324 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@108 -- # echo mlx_0_0 00:15:27.324 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@109 -- # continue 2 
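Both ports now report their addresses (192.168.100.8 and .9, each /24; the links are administratively DOWN at this point but addressed). They only exist as RDMA netdevs because rdma_device_init loaded the IB stack just above; collected from the trace (nvmf/common.sh lines 66-72), the module sequence is:

# load_ib_rdma_modules
modprobe ib_cm
modprobe ib_core
modprobe ib_umad
modprobe ib_uverbs
modprobe iw_cm
modprobe rdma_cm
modprobe rdma_ucm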
00:15:27.324 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:15:27.324 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:27.324 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:27.324 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:27.324 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:27.324 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@108 -- # echo mlx_0_1 00:15:27.324 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@109 -- # continue 2 00:15:27.324 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:15:27.324 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:15:27.324 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:15:27.324 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:15:27.324 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # awk '{print $4}' 00:15:27.324 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # cut -d/ -f1 00:15:27.324 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:15:27.324 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:15:27.324 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:15:27.324 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:15:27.324 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # awk '{print $4}' 00:15:27.324 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # cut -d/ -f1 00:15:27.324 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:15:27.324 192.168.100.9' 00:15:27.324 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:15:27.324 192.168.100.9' 00:15:27.324 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@485 -- # head -n 1 00:15:27.324 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:15:27.324 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:15:27.324 192.168.100.9' 00:15:27.324 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@486 -- # tail -n +2 00:15:27.324 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@486 -- # head -n 1 00:15:27.324 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:15:27.324 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:15:27.325 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:15:27.325 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:15:27.325 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' rdma 
== rdma ']' 00:15:27.325 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:15:27.325 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:15:27.325 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:27.325 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:27.325 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:27.325 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=1640281 00:15:27.325 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 1640281 00:15:27.325 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:15:27.325 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 1640281 ']' 00:15:27.325 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:27.325 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:27.325 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:27.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:27.325 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:27.325 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:27.325 [2024-12-05 13:46:26.709668] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 00:15:27.325 [2024-12-05 13:46:26.709717] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:27.325 [2024-12-05 13:46:26.783235] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:27.325 [2024-12-05 13:46:26.805446] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:27.325 [2024-12-05 13:46:26.805486] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:27.325 [2024-12-05 13:46:26.805493] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:27.325 [2024-12-05 13:46:26.805498] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:27.325 [2024-12-05 13:46:26.805502] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
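nvmfappstart has just launched the target: -m 0x78 is a core mask for cores 3-6 (matching the four reactor notices that follow), -e 0xFFFF enables all tracepoint groups, and -i 0 sets the shared-memory ID. Reproduced by hand, with waitforlisten approximated by polling a single RPC until the socket answers (an assumption; the harness's polling loop is not shown in this slice of the trace):

/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 &
nvmfpid=$!
# crude stand-in for waitforlisten: succeed once /var/tmp/spdk.sock accepts RPCs
until /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done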
00:15:27.325 [2024-12-05 13:46:26.806815] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:15:27.325 [2024-12-05 13:46:26.806934] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:15:27.325 [2024-12-05 13:46:26.807020] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:27.325 [2024-12-05 13:46:26.807022] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:15:27.325 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:27.325 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:15:27.325 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:27.325 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:27.325 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:27.325 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:27.325 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:15:27.325 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.325 13:46:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:27.325 [2024-12-05 13:46:26.964073] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x82d830/0x831d20) succeed. 00:15:27.325 [2024-12-05 13:46:26.972501] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x82eec0/0x8733c0) succeed. 
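With the transport registered and both mlx5 ports reporting "Create IB device ... succeed", bdevio.sh provisions the target through four more RPCs, traced next. Stripped of the xtrace noise, the whole target-side sequence is equivalent to the following, with every value taken directly from the log:

./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0    # 64 MiB bdev, 512-byte blocks
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

The bdevio binary then attaches as an initiator using the JSON emitted by gen_nvmf_target_json (the bdev_nvme_attach_controller block printed below), so no kernel-side nvme connect is needed for this test.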
00:15:27.325 13:46:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.325 13:46:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:27.325 13:46:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.325 13:46:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:27.325 Malloc0 00:15:27.325 13:46:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.325 13:46:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:27.325 13:46:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.325 13:46:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:27.325 13:46:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.325 13:46:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:27.325 13:46:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.325 13:46:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:27.325 13:46:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.325 13:46:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:15:27.325 13:46:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.325 13:46:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:27.325 [2024-12-05 13:46:27.138714] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:27.325 13:46:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.325 13:46:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:15:27.325 13:46:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:15:27.325 13:46:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:15:27.325 13:46:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:15:27.325 13:46:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:15:27.325 13:46:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:15:27.325 { 00:15:27.325 "params": { 00:15:27.325 "name": "Nvme$subsystem", 00:15:27.325 "trtype": "$TEST_TRANSPORT", 00:15:27.325 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:27.325 "adrfam": "ipv4", 00:15:27.325 "trsvcid": "$NVMF_PORT", 00:15:27.325 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:27.325 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:27.325 "hdgst": ${hdgst:-false}, 00:15:27.325 "ddgst": ${ddgst:-false} 00:15:27.325 }, 00:15:27.325 "method": "bdev_nvme_attach_controller" 00:15:27.325 } 00:15:27.325 EOF 00:15:27.325 )") 00:15:27.325 13:46:27 
nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:15:27.325 13:46:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:15:27.325 13:46:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:15:27.325 13:46:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:15:27.325 "params": { 00:15:27.325 "name": "Nvme1", 00:15:27.325 "trtype": "rdma", 00:15:27.325 "traddr": "192.168.100.8", 00:15:27.325 "adrfam": "ipv4", 00:15:27.325 "trsvcid": "4420", 00:15:27.325 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:27.325 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:27.325 "hdgst": false, 00:15:27.325 "ddgst": false 00:15:27.325 }, 00:15:27.325 "method": "bdev_nvme_attach_controller" 00:15:27.325 }' 00:15:27.584 [2024-12-05 13:46:27.189149] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 00:15:27.584 [2024-12-05 13:46:27.189195] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1640316 ] 00:15:27.584 [2024-12-05 13:46:27.263567] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:27.584 [2024-12-05 13:46:27.287250] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:27.585 [2024-12-05 13:46:27.287359] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:27.585 [2024-12-05 13:46:27.287360] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:27.844 I/O targets: 00:15:27.844 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:15:27.844 00:15:27.844 00:15:27.844 CUnit - A unit testing framework for C - Version 2.1-3 00:15:27.844 http://cunit.sourceforge.net/ 00:15:27.844 00:15:27.844 00:15:27.844 Suite: bdevio tests on: Nvme1n1 00:15:27.844 Test: blockdev write read block ...passed 00:15:27.844 Test: blockdev write zeroes read block ...passed 00:15:27.844 Test: blockdev write zeroes read no split ...passed 00:15:27.844 Test: blockdev write zeroes read split ...passed 00:15:27.844 Test: blockdev write zeroes read split partial ...passed 00:15:27.844 Test: blockdev reset ...[2024-12-05 13:46:27.487593] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:15:27.844 [2024-12-05 13:46:27.509445] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:15:27.844 [2024-12-05 13:46:27.537245] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:15:27.844 passed 00:15:27.844 Test: blockdev write read 8 blocks ...passed 00:15:27.844 Test: blockdev write read size > 128k ...passed 00:15:27.844 Test: blockdev write read invalid size ...passed 00:15:27.844 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:27.844 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:27.844 Test: blockdev write read max offset ...passed 00:15:27.844 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:27.844 Test: blockdev writev readv 8 blocks ...passed 00:15:27.844 Test: blockdev writev readv 30 x 1block ...passed 00:15:27.844 Test: blockdev writev readv block ...passed 00:15:27.844 Test: blockdev writev readv size > 128k ...passed 00:15:27.844 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:27.844 Test: blockdev comparev and writev ...[2024-12-05 13:46:27.540353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:27.844 [2024-12-05 13:46:27.540383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:27.844 [2024-12-05 13:46:27.540393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:27.844 [2024-12-05 13:46:27.540400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:15:27.844 [2024-12-05 13:46:27.540555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:27.844 [2024-12-05 13:46:27.540564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:15:27.844 [2024-12-05 13:46:27.540571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:27.844 [2024-12-05 13:46:27.540577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:15:27.844 [2024-12-05 13:46:27.540730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:27.844 [2024-12-05 13:46:27.540738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:15:27.844 [2024-12-05 13:46:27.540746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:27.844 [2024-12-05 13:46:27.540752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:15:27.844 [2024-12-05 13:46:27.540903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:27.844 [2024-12-05 13:46:27.540912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:15:27.844 [2024-12-05 13:46:27.540919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:27.844 [2024-12-05 13:46:27.540924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:15:27.844 passed 00:15:27.844 Test: blockdev nvme passthru rw ...passed 00:15:27.844 Test: blockdev nvme passthru vendor specific ...[2024-12-05 13:46:27.541193] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:15:27.844 [2024-12-05 13:46:27.541206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:15:27.844 [2024-12-05 13:46:27.541241] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:15:27.844 [2024-12-05 13:46:27.541248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:15:27.844 [2024-12-05 13:46:27.541287] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:15:27.844 [2024-12-05 13:46:27.541294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:15:27.844 [2024-12-05 13:46:27.541333] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:15:27.844 [2024-12-05 13:46:27.541340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:15:27.844 passed 00:15:27.844 Test: blockdev nvme admin passthru ...passed 00:15:27.844 Test: blockdev copy ...passed 00:15:27.844 00:15:27.844 Run Summary: Type Total Ran Passed Failed Inactive 00:15:27.844 suites 1 1 n/a 0 0 00:15:27.844 tests 23 23 23 0 0 00:15:27.844 asserts 152 152 152 0 n/a 00:15:27.844 00:15:27.844 Elapsed time = 0.171 seconds 00:15:28.104 13:46:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:28.104 13:46:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.104 13:46:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:28.104 13:46:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.104 13:46:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:15:28.104 13:46:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:15:28.104 13:46:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:28.104 13:46:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:15:28.104 13:46:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:15:28.104 13:46:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:15:28.104 13:46:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:15:28.104 13:46:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:28.104 13:46:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:15:28.104 rmmod nvme_rdma 00:15:28.104 rmmod nvme_fabrics 00:15:28.104 13:46:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:28.104 13:46:27 
nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:15:28.104 13:46:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:15:28.104 13:46:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 1640281 ']' 00:15:28.104 13:46:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 1640281 00:15:28.104 13:46:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 1640281 ']' 00:15:28.104 13:46:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 1640281 00:15:28.104 13:46:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:15:28.104 13:46:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:28.104 13:46:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1640281 00:15:28.104 13:46:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:15:28.104 13:46:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:15:28.104 13:46:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1640281' 00:15:28.104 killing process with pid 1640281 00:15:28.104 13:46:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 1640281 00:15:28.104 13:46:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 1640281 00:15:28.365 13:46:28 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:28.365 13:46:28 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:15:28.365 00:15:28.365 real 0m7.755s 00:15:28.365 user 0m7.722s 00:15:28.365 sys 0m5.138s 00:15:28.365 13:46:28 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:28.365 13:46:28 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:28.365 ************************************ 00:15:28.365 END TEST nvmf_bdevio 00:15:28.365 ************************************ 00:15:28.365 13:46:28 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:15:28.365 00:15:28.365 real 3m54.190s 00:15:28.365 user 10m27.043s 00:15:28.365 sys 1m23.457s 00:15:28.365 13:46:28 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:28.365 13:46:28 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:15:28.365 ************************************ 00:15:28.365 END TEST nvmf_target_core 00:15:28.365 ************************************ 00:15:28.365 13:46:28 nvmf_rdma -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=rdma 00:15:28.365 13:46:28 nvmf_rdma -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:28.365 13:46:28 nvmf_rdma -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:28.365 13:46:28 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:15:28.365 ************************************ 00:15:28.365 START TEST nvmf_target_extra 00:15:28.365 ************************************ 00:15:28.365 13:46:28 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=rdma 00:15:28.624 * Looking for test storage... 00:15:28.624 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf 00:15:28.624 13:46:28 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:28.624 13:46:28 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lcov --version 00:15:28.624 13:46:28 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:28.624 13:46:28 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:28.624 13:46:28 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:28.624 13:46:28 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:28.624 13:46:28 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:28.624 13:46:28 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:15:28.624 13:46:28 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:15:28.624 13:46:28 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:15:28.624 13:46:28 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:15:28.624 13:46:28 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:15:28.624 13:46:28 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:15:28.624 13:46:28 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:15:28.624 13:46:28 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:28.624 13:46:28 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:15:28.624 13:46:28 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:15:28.624 13:46:28 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:28.624 13:46:28 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:28.624 13:46:28 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:15:28.624 13:46:28 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:15:28.624 13:46:28 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:28.624 13:46:28 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:15:28.624 13:46:28 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:15:28.624 13:46:28 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:15:28.624 13:46:28 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:15:28.624 13:46:28 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:28.624 13:46:28 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:15:28.624 13:46:28 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:15:28.624 13:46:28 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:28.624 13:46:28 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:28.624 13:46:28 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:15:28.624 13:46:28 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:28.624 13:46:28 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:28.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:28.624 --rc genhtml_branch_coverage=1 00:15:28.624 --rc genhtml_function_coverage=1 00:15:28.624 --rc genhtml_legend=1 00:15:28.624 --rc geninfo_all_blocks=1 00:15:28.624 --rc geninfo_unexecuted_blocks=1 00:15:28.624 00:15:28.624 ' 00:15:28.624 13:46:28 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:28.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:28.624 --rc genhtml_branch_coverage=1 00:15:28.624 --rc genhtml_function_coverage=1 00:15:28.624 --rc genhtml_legend=1 00:15:28.624 --rc geninfo_all_blocks=1 00:15:28.624 --rc geninfo_unexecuted_blocks=1 00:15:28.624 00:15:28.624 ' 00:15:28.624 13:46:28 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:28.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:28.624 --rc genhtml_branch_coverage=1 00:15:28.624 --rc genhtml_function_coverage=1 00:15:28.624 --rc genhtml_legend=1 00:15:28.624 --rc geninfo_all_blocks=1 00:15:28.624 --rc geninfo_unexecuted_blocks=1 00:15:28.624 00:15:28.624 ' 00:15:28.624 13:46:28 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:28.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:28.624 --rc genhtml_branch_coverage=1 00:15:28.625 --rc genhtml_function_coverage=1 00:15:28.625 --rc genhtml_legend=1 00:15:28.625 --rc geninfo_all_blocks=1 00:15:28.625 --rc geninfo_unexecuted_blocks=1 00:15:28.625 00:15:28.625 ' 00:15:28.625 13:46:28 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:15:28.625 13:46:28 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:15:28.625 13:46:28 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:28.625 13:46:28 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:28.625 13:46:28 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:15:28.625 13:46:28 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:28.625 13:46:28 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:28.625 13:46:28 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:28.625 13:46:28 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:28.625 13:46:28 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:28.625 13:46:28 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:28.625 13:46:28 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:28.625 13:46:28 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:15:28.625 13:46:28 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:15:28.625 13:46:28 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:28.625 13:46:28 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:28.625 13:46:28 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:28.625 13:46:28 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:28.625 13:46:28 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:15:28.625 13:46:28 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:15:28.625 13:46:28 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:28.625 13:46:28 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:28.625 13:46:28 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:28.625 13:46:28 nvmf_rdma.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:28.625 13:46:28 nvmf_rdma.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:28.625 13:46:28 nvmf_rdma.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:28.625 13:46:28 nvmf_rdma.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:15:28.625 13:46:28 nvmf_rdma.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:28.625 13:46:28 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:15:28.625 13:46:28 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:28.625 13:46:28 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:28.625 13:46:28 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:28.625 13:46:28 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:28.625 13:46:28 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:28.625 13:46:28 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:28.625 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:28.625 13:46:28 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:28.625 13:46:28 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:28.625 13:46:28 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:28.625 13:46:28 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:15:28.625 13:46:28 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:15:28.625 13:46:28 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:15:28.625 13:46:28 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=rdma 00:15:28.625 13:46:28 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:28.625 13:46:28 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:28.625 13:46:28 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:28.625 ************************************ 00:15:28.625 START TEST nvmf_example 00:15:28.625 ************************************ 00:15:28.625 13:46:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=rdma 00:15:28.625 * Looking for test storage... 
00:15:28.884 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:15:28.884 13:46:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:28.884 13:46:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lcov --version 00:15:28.884 13:46:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:28.884 13:46:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:28.884 13:46:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:28.884 13:46:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:28.884 13:46:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:28.884 13:46:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:15:28.884 13:46:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:15:28.884 13:46:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:15:28.884 13:46:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:15:28.884 13:46:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:15:28.884 13:46:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:15:28.884 13:46:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:15:28.884 13:46:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:28.884 13:46:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:15:28.884 13:46:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:15:28.884 13:46:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:28.884 13:46:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:28.884 13:46:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:15:28.884 13:46:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:15:28.884 13:46:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:28.884 13:46:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:15:28.884 13:46:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:15:28.884 13:46:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:15:28.884 13:46:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:15:28.884 13:46:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:28.884 13:46:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:15:28.884 13:46:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:15:28.884 13:46:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:28.884 13:46:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:28.884 13:46:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:15:28.884 13:46:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:28.884 13:46:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:28.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:28.884 --rc genhtml_branch_coverage=1 00:15:28.884 --rc genhtml_function_coverage=1 00:15:28.884 --rc genhtml_legend=1 00:15:28.884 --rc geninfo_all_blocks=1 00:15:28.884 --rc geninfo_unexecuted_blocks=1 00:15:28.884 00:15:28.884 ' 00:15:28.884 13:46:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:28.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:28.884 --rc genhtml_branch_coverage=1 00:15:28.884 --rc genhtml_function_coverage=1 00:15:28.884 --rc genhtml_legend=1 00:15:28.884 --rc geninfo_all_blocks=1 00:15:28.884 --rc geninfo_unexecuted_blocks=1 00:15:28.885 00:15:28.885 ' 00:15:28.885 13:46:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:28.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:28.885 --rc genhtml_branch_coverage=1 00:15:28.885 --rc genhtml_function_coverage=1 00:15:28.885 --rc genhtml_legend=1 00:15:28.885 --rc geninfo_all_blocks=1 00:15:28.885 --rc geninfo_unexecuted_blocks=1 00:15:28.885 00:15:28.885 ' 00:15:28.885 13:46:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:28.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:28.885 --rc genhtml_branch_coverage=1 00:15:28.885 --rc genhtml_function_coverage=1 00:15:28.885 --rc genhtml_legend=1 00:15:28.885 --rc geninfo_all_blocks=1 00:15:28.885 --rc geninfo_unexecuted_blocks=1 00:15:28.885 00:15:28.885 ' 00:15:28.885 13:46:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:15:28.885 13:46:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 
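The lt 1.15 2 call traced above is how the harness decides whether the installed lcov predates 2.x and therefore still wants the legacy --rc lcov_branch_coverage/lcov_function_coverage options. A condensed, self-contained sketch of that cmp_versions logic (an approximation of the scripts/common.sh helpers, not a verbatim copy):

lt() {
    local -a v1 v2
    local i
    IFS='.-:' read -ra v1 <<< "$1"    # "1.15" -> (1 15)
    IFS='.-:' read -ra v2 <<< "$2"    # "2"    -> (2)
    for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0    # first differing component decides
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
    done
    return 1    # equal versions are not less-than
}
lt 1.15 2 && echo "lcov older than 2.x: keep the legacy --rc options"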
00:15:28.885 13:46:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:28.885 13:46:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:28.885 13:46:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:28.885 13:46:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:28.885 13:46:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:28.885 13:46:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:28.885 13:46:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:28.885 13:46:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:28.885 13:46:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:28.885 13:46:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:28.885 13:46:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:15:28.885 13:46:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:15:28.885 13:46:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:28.885 13:46:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:28.885 13:46:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:28.885 13:46:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:28.885 13:46:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:15:28.885 13:46:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:15:28.885 13:46:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:28.885 13:46:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:28.885 13:46:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:28.885 13:46:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:28.885 13:46:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:28.885 13:46:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:28.885 13:46:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:15:28.885 13:46:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:28.885 13:46:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:15:28.885 13:46:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:28.885 13:46:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:28.885 13:46:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:28.885 13:46:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:28.885 13:46:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:28.885 13:46:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:28.885 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:28.885 13:46:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:28.885 13:46:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:28.885 13:46:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:28.885 13:46:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:15:28.885 13:46:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 
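nvmftestinit, traced next, walks the PCI bus for supported NICs (here the two Mellanox 0x15b3:0x1015 ports at 0000:18:00.0 and 0000:18:00.1), loads the ib_*/rdma_* kernel modules, and then derives the target addresses from the netdevs. The address extraction reduces to the pipeline below, which is exactly what the trace shows yielding 192.168.100.8 and 192.168.100.9:

# First IPv4 address of each RDMA-backed netdev, with the /24 prefix stripped.
for ifc in mlx_0_0 mlx_0_1; do
    ip -o -4 addr show "$ifc" | awk '{print $4}' | cut -d/ -f1
done
# -> 192.168.100.8 (NVMF_FIRST_TARGET_IP) and 192.168.100.9 (NVMF_SECOND_TARGET_IP)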
00:15:28.885 13:46:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:15:28.885 13:46:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:15:28.885 13:46:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:15:28.885 13:46:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:15:28.885 13:46:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:15:28.885 13:46:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:15:28.885 13:46:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:28.885 13:46:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:15:28.885 13:46:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:15:28.885 13:46:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:15:28.885 13:46:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:28.885 13:46:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:28.885 13:46:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:28.885 13:46:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:28.885 13:46:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:28.885 13:46:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:28.885 13:46:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:28.885 13:46:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:28.885 13:46:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:28.885 13:46:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:15:28.885 13:46:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:15:35.459 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:35.459 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:15:35.459 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:35.459 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:35.459 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:35.459 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:35.459 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:35.459 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:15:35.459 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:35.459 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 
00:15:35.459 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:15:35.459 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:15:35.459 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:15:35.459 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=() 00:15:35.459 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:15:35.459 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:35.459 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:35.459 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:35.459 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:35.459 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:35.459 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:35.459 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:35.459 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:35.459 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:35.459 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:35.459 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:35.459 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:35.459 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:35.459 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:15:35.460 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:15:35.460 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:15:35.460 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:15:35.460 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:15:35.460 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:35.460 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:35.460 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:15:35.460 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:15:35.460 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:15:35.460 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:15:35.460 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 
00:15:35.460 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:35.460 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:15:35.460 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:15:35.460 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:35.460 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:15:35.460 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:15:35.460 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:15:35.460 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:15:35.460 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:35.460 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:35.460 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:15:35.460 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:15:35.460 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:35.460 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:15:35.460 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:35.460 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:35.460 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:15:35.460 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:35.460 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:35.460 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:15:35.460 Found net devices under 0000:18:00.0: mlx_0_0 00:15:35.460 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:35.460 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:35.460 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:35.460 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:15:35.460 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:35.460 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:35.460 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:15:35.460 Found net devices under 0000:18:00.1: mlx_0_1 00:15:35.460 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:35.460 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:35.460 13:46:34 
nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:15:35.460 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:35.460 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:15:35.460 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:15:35.460 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@448 -- # rdma_device_init 00:15:35.460 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:15:35.460 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@62 -- # uname 00:15:35.460 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:15:35.460 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@66 -- # modprobe ib_cm 00:15:35.460 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@67 -- # modprobe ib_core 00:15:35.460 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@68 -- # modprobe ib_umad 00:15:35.460 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:15:35.460 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@70 -- # modprobe iw_cm 00:15:35.460 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:15:35.460 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:15:35.460 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@530 -- # allocate_nic_ips 00:15:35.460 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:15:35.460 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@77 -- # get_rdma_if_list 00:15:35.460 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:35.460 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:15:35.460 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:15:35.460 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:35.460 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:15:35.460 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:15:35.460 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:35.460 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:35.460 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@108 -- # echo mlx_0_0 00:15:35.460 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@109 -- # continue 2 00:15:35.460 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:15:35.460 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:35.460 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:35.460 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:35.460 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:35.460 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@108 -- # echo mlx_0_1 00:15:35.460 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@109 -- # continue 2 00:15:35.460 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:15:35.460 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:15:35.460 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:15:35.460 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:15:35.460 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # awk '{print $4}' 00:15:35.460 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # cut -d/ -f1 00:15:35.460 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:15:35.460 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:15:35.460 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:15:35.460 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:35.460 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:15:35.460 altname enp24s0f0np0 00:15:35.460 altname ens785f0np0 00:15:35.460 inet 192.168.100.8/24 scope global mlx_0_0 00:15:35.460 valid_lft forever preferred_lft forever 00:15:35.460 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:15:35.460 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:15:35.460 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:15:35.460 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:15:35.460 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # awk '{print $4}' 00:15:35.460 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # cut -d/ -f1 00:15:35.460 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:15:35.460 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:15:35.460 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:15:35.460 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:35.460 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:15:35.460 altname enp24s0f1np1 00:15:35.460 altname ens785f1np1 00:15:35.460 inet 192.168.100.9/24 scope global mlx_0_1 00:15:35.460 valid_lft forever preferred_lft forever 00:15:35.460 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:15:35.460 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:35.460 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:15:35.460 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:15:35.460 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@484 -- # 
get_available_rdma_ips 00:15:35.460 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@90 -- # get_rdma_if_list 00:15:35.460 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:35.460 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:15:35.460 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:15:35.460 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:35.460 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:15:35.460 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:15:35.460 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:35.460 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:35.460 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@108 -- # echo mlx_0_0 00:15:35.460 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@109 -- # continue 2 00:15:35.460 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:15:35.460 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:35.460 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:35.460 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:35.460 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:35.460 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@108 -- # echo mlx_0_1 00:15:35.460 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@109 -- # continue 2 00:15:35.460 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:15:35.460 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:15:35.460 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:15:35.460 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:15:35.460 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # awk '{print $4}' 00:15:35.460 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # cut -d/ -f1 00:15:35.460 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:15:35.460 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:15:35.460 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:15:35.460 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:15:35.460 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # awk '{print $4}' 00:15:35.460 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # cut -d/ -f1 00:15:35.460 13:46:34 
nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:15:35.460 192.168.100.9' 00:15:35.460 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:15:35.460 192.168.100.9' 00:15:35.460 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@485 -- # head -n 1 00:15:35.460 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:15:35.460 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:15:35.460 192.168.100.9' 00:15:35.460 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@486 -- # tail -n +2 00:15:35.460 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@486 -- # head -n 1 00:15:35.460 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:15:35.460 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:15:35.460 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:15:35.460 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:15:35.460 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:15:35.460 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:15:35.460 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:15:35.460 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:15:35.460 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:35.460 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:15:35.460 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' rdma == tcp ']' 00:15:35.460 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=1643893 00:15:35.460 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:35.461 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:15:35.461 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 1643893 00:15:35.461 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 1643893 ']' 00:15:35.461 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:35.461 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:35.461 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:35.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
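
For reference, the address derivation traced above reduces to a short pipeline: "ip -o -4 addr show DEV" prints one line per IPv4 address with the CIDR in field 4, awk extracts that field, and cut drops the prefix length; the harness then takes the first and second results as the two target IPs. A minimal standalone sketch under those assumptions (the interface names are hard-coded here purely for illustration):

    #!/usr/bin/env bash
    # Print the first IPv4 address bound to an interface, mirroring the
    # get_ip_address trace above: field 4 of "ip -o -4 addr show" is
    # e.g. "192.168.100.8/24", and cut strips the "/24".
    get_ip_address() {
        ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
    }

    # One address per RDMA-capable interface, then the first/second
    # split seen in the RDMA_IP_LIST handling above.
    ips=""
    for nic in mlx_0_0 mlx_0_1; do
        ips+="$(get_ip_address "$nic")"$'\n'
    done
    NVMF_FIRST_TARGET_IP=$(echo "$ips" | head -n 1)
    NVMF_SECOND_TARGET_IP=$(echo "$ips" | tail -n +2 | head -n 1)
    echo "first=$NVMF_FIRST_TARGET_IP second=$NVMF_SECOND_TARGET_IP"
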
00:15:35.461 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:35.461 13:46:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:15:35.719 13:46:35 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:35.719 13:46:35 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:15:35.719 13:46:35 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:15:35.719 13:46:35 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:35.719 13:46:35 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:15:35.979 13:46:35 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:15:35.979 13:46:35 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.979 13:46:35 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:15:35.979 13:46:35 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.979 13:46:35 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:15:35.979 13:46:35 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.979 13:46:35 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:15:35.979 13:46:35 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.979 13:46:35 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:15:35.979 13:46:35 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:35.979 13:46:35 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.979 13:46:35 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:15:35.979 13:46:35 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.979 13:46:35 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:15:35.979 13:46:35 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:35.979 13:46:35 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.979 13:46:35 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:15:35.979 13:46:35 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.979 13:46:35 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:15:35.979 13:46:35 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.979 13:46:35 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:15:35.979 13:46:35 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:15:35.979 13:46:35 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:15:35.979 13:46:35 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:15:48.290 Initializing NVMe Controllers 00:15:48.290 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:15:48.290 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:48.290 Initialization complete. Launching workers. 00:15:48.290 ======================================================== 00:15:48.290 Latency(us) 00:15:48.290 Device Information : IOPS MiB/s Average min max 00:15:48.290 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 26693.03 104.27 2397.37 565.00 15834.95 00:15:48.290 ======================================================== 00:15:48.290 Total : 26693.03 104.27 2397.37 565.00 15834.95 00:15:48.290 00:15:48.290 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:15:48.290 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:15:48.290 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:48.290 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:15:48.290 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:15:48.290 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:15:48.290 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:15:48.290 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:48.290 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:15:48.290 rmmod nvme_rdma 00:15:48.290 rmmod nvme_fabrics 00:15:48.290 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:48.290 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 00:15:48.290 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:15:48.290 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 1643893 ']' 00:15:48.290 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 1643893 00:15:48.290 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 1643893 ']' 00:15:48.290 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 1643893 00:15:48.290 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:15:48.290 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:48.290 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1643893 00:15:48.290 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf 00:15:48.290 13:46:47 
nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:15:48.290 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1643893' 00:15:48.290 killing process with pid 1643893 00:15:48.290 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 1643893 00:15:48.290 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 1643893 00:15:48.290 nvmf threads initialize successfully 00:15:48.290 bdev subsystem init successfully 00:15:48.290 created a nvmf target service 00:15:48.290 create targets's poll groups done 00:15:48.290 all subsystems of target started 00:15:48.290 nvmf target is running 00:15:48.290 all subsystems of target stopped 00:15:48.290 destroy targets's poll groups done 00:15:48.290 destroyed the nvmf target service 00:15:48.290 bdev subsystem finish successfully 00:15:48.290 nvmf threads destroy successfully 00:15:48.290 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:48.290 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:15:48.290 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:15:48.291 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:48.291 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:15:48.291 00:15:48.291 real 0m18.990s 00:15:48.291 user 0m51.761s 00:15:48.291 sys 0m5.087s 00:15:48.291 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:48.291 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:15:48.291 ************************************ 00:15:48.291 END TEST nvmf_example 00:15:48.291 ************************************ 00:15:48.291 13:46:47 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=rdma 00:15:48.291 13:46:47 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:48.291 13:46:47 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:48.291 13:46:47 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:48.291 ************************************ 00:15:48.291 START TEST nvmf_filesystem 00:15:48.291 ************************************ 00:15:48.291 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=rdma 00:15:48.291 * Looking for test storage... 
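
For readers reconstructing the example test above: each rpc_cmd in the trace wraps SPDK's scripts/rpc.py talking to /var/tmp/spdk.sock, so the target plumbing amounts to the sequence below. Every RPC name, flag, and value is taken from the trace; only the rpc.py path is assumed (relative to an SPDK checkout), and the sequence presumes a running nvmf target:

    #!/usr/bin/env bash
    rpc=./scripts/rpc.py   # assumed location inside an SPDK tree

    # RDMA transport with the buffer sizing used by the test.
    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192

    # 64 MB malloc bdev with 512-byte blocks; prints its name (Malloc0).
    $rpc bdev_malloc_create 64 512

    # Subsystem, namespace, and RDMA listener on the first target IP.
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t rdma -a 192.168.100.8 -s 4420

The spdk_nvme_perf invocation in the trace (-q 64 -o 4096 -w randrw -M 30 -t 10) then drives that listener, which is where the ~26.7k IOPS / ~2.4 ms average latency figures above come from.
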
00:15:48.291 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:15:48.291 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:48.291 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:15:48.291 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:48.291 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:48.291 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:48.291 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:48.291 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:48.291 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:15:48.291 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:15:48.291 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:15:48.291 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:15:48.291 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:15:48.291 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:15:48.291 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:15:48.291 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:48.291 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:15:48.291 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:15:48.291 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:48.291 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:48.291 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:15:48.291 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:15:48.291 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:48.291 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:15:48.291 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:15:48.291 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:15:48.291 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:15:48.291 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:48.291 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:15:48.291 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:15:48.291 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:48.291 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:48.291 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:15:48.291 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:48.291 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:48.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:48.291 --rc genhtml_branch_coverage=1 00:15:48.291 --rc genhtml_function_coverage=1 00:15:48.291 --rc genhtml_legend=1 00:15:48.291 --rc geninfo_all_blocks=1 00:15:48.291 --rc geninfo_unexecuted_blocks=1 00:15:48.291 00:15:48.291 ' 00:15:48.291 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:48.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:48.291 --rc genhtml_branch_coverage=1 00:15:48.291 --rc genhtml_function_coverage=1 00:15:48.291 --rc genhtml_legend=1 00:15:48.291 --rc geninfo_all_blocks=1 00:15:48.291 --rc geninfo_unexecuted_blocks=1 00:15:48.291 00:15:48.291 ' 00:15:48.291 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:48.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:48.291 --rc genhtml_branch_coverage=1 00:15:48.291 --rc genhtml_function_coverage=1 00:15:48.291 --rc genhtml_legend=1 00:15:48.291 --rc geninfo_all_blocks=1 00:15:48.291 --rc geninfo_unexecuted_blocks=1 00:15:48.291 00:15:48.291 ' 00:15:48.291 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:48.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:48.291 --rc genhtml_branch_coverage=1 00:15:48.291 --rc genhtml_function_coverage=1 00:15:48.291 --rc genhtml_legend=1 00:15:48.291 --rc geninfo_all_blocks=1 00:15:48.291 --rc geninfo_unexecuted_blocks=1 00:15:48.291 00:15:48.291 ' 00:15:48.291 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh 00:15:48.291 13:46:47 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:15:48.291 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:15:48.291 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:15:48.291 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:15:48.291 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:15:48.291 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output ']' 00:15:48.291 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh ]] 00:15:48.291 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh 00:15:48.291 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:15:48.291 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:15:48.291 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:15:48.291 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:15:48.292 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:15:48.292 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:15:48.292 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:15:48.292 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:15:48.292 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:15:48.292 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:15:48.292 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:15:48.292 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:15:48.292 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:15:48.292 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:15:48.292 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:15:48.292 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:15:48.292 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:15:48.292 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:15:48.292 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:15:48.292 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 
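
The "lt 1.15 2" trace near the start of the filesystem test gates lcov handling on its version: scripts/common.sh splits each version string on '.', '-', or ':' and compares the pieces numerically, left to right. A simplified sketch of that comparator (the real cmp_versions also handles '>', '=', and non-numeric components via its decimal helper):

    #!/usr/bin/env bash
    # Simplified form of the cmp_versions logic traced above.
    lt() {  # lt 1.15 2  -> returns 0 (true) when $1 < $2
        local -a ver1 ver2
        local v len
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$2"
        len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            # Missing components compare as 0, so "2" behaves as "2.0".
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1  # equal versions are not "less than"
    }

    lt 1.15 2 && echo "lcov 1.15 predates 2.x"   # matches the trace
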
00:15:48.292 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:15:48.292 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:15:48.292 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:15:48.292 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:15:48.292 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:15:48.292 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:15:48.292 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:15:48.292 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:15:48.292 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:15:48.292 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:15:48.292 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:15:48.292 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:15:48.292 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:15:48.292 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:15:48.292 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:15:48.292 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:15:48.292 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:15:48.292 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:15:48.292 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:15:48.292 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:15:48.292 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:15:48.292 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:15:48.292 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:15:48.292 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:15:48.292 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:15:48.292 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:15:48.292 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:15:48.292 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:15:48.292 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 
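
The CONFIG_* values in this build_config.sh dump are mirrored into the generated include/spdk/config.h, and the harness detects debug builds by substring-matching that header, as the applications.sh trace a little further below shows. A small illustrative guard in the same spirit (header path taken from this job's workspace; the messages are placeholders):

    #!/usr/bin/env bash
    # Decide debug vs. non-debug build by looking for the define in the
    # generated header, as common/applications.sh does below.
    config_h=/var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/config.h

    if [[ -e $config_h && $(< "$config_h") == *"#define SPDK_CONFIG_DEBUG"* ]]; then
        echo "debug build: debug-only app wrappers may be enabled"
    else
        echo "non-debug build"
    fi
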
00:15:48.292 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:15:48.292 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:15:48.292 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:15:48.292 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:15:48.292 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:15:48.292 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:15:48.292 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:15:48.292 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:15:48.292 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:15:48.292 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:15:48.292 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:15:48.292 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:15:48.292 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:15:48.292 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:15:48.292 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:15:48.292 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:15:48.292 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:15:48.292 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:15:48.292 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:15:48.292 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:15:48.292 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:15:48.292 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:15:48.292 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:15:48.292 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:15:48.292 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:15:48.292 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:15:48.292 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:15:48.292 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:15:48.292 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:15:48.292 
13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:15:48.292 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:15:48.292 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:15:48.292 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:15:48.292 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:15:48.292 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:15:48.293 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:15:48.293 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:15:48.293 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:15:48.293 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:15:48.293 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:15:48.293 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:15:48.293 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/applications.sh 00:15:48.293 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/applications.sh 00:15:48.293 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common 00:15:48.293 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common 00:15:48.293 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:15:48.293 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:15:48.293 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:15:48.293 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:15:48.293 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:15:48.293 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:15:48.293 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:15:48.293 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:15:48.293 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:15:48.293 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 
-- # SPDK_APP=("$_app_dir/spdk_tgt") 00:15:48.293 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/config.h ]] 00:15:48.293 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:15:48.293 #define SPDK_CONFIG_H 00:15:48.293 #define SPDK_CONFIG_AIO_FSDEV 1 00:15:48.293 #define SPDK_CONFIG_APPS 1 00:15:48.293 #define SPDK_CONFIG_ARCH native 00:15:48.293 #undef SPDK_CONFIG_ASAN 00:15:48.293 #undef SPDK_CONFIG_AVAHI 00:15:48.293 #undef SPDK_CONFIG_CET 00:15:48.293 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:15:48.293 #define SPDK_CONFIG_COVERAGE 1 00:15:48.293 #define SPDK_CONFIG_CROSS_PREFIX 00:15:48.293 #undef SPDK_CONFIG_CRYPTO 00:15:48.293 #undef SPDK_CONFIG_CRYPTO_MLX5 00:15:48.293 #undef SPDK_CONFIG_CUSTOMOCF 00:15:48.293 #undef SPDK_CONFIG_DAOS 00:15:48.293 #define SPDK_CONFIG_DAOS_DIR 00:15:48.293 #define SPDK_CONFIG_DEBUG 1 00:15:48.293 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:15:48.293 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:15:48.293 #define SPDK_CONFIG_DPDK_INC_DIR //var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:15:48.293 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:15:48.293 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:15:48.293 #undef SPDK_CONFIG_DPDK_UADK 00:15:48.293 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:15:48.293 #define SPDK_CONFIG_EXAMPLES 1 00:15:48.293 #undef SPDK_CONFIG_FC 00:15:48.293 #define SPDK_CONFIG_FC_PATH 00:15:48.293 #define SPDK_CONFIG_FIO_PLUGIN 1 00:15:48.293 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:15:48.293 #define SPDK_CONFIG_FSDEV 1 00:15:48.293 #undef SPDK_CONFIG_FUSE 00:15:48.293 #undef SPDK_CONFIG_FUZZER 00:15:48.293 #define SPDK_CONFIG_FUZZER_LIB 00:15:48.293 #undef SPDK_CONFIG_GOLANG 00:15:48.293 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:15:48.293 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:15:48.293 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:15:48.293 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:15:48.293 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:15:48.293 #undef SPDK_CONFIG_HAVE_LIBBSD 00:15:48.293 #undef SPDK_CONFIG_HAVE_LZ4 00:15:48.293 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:15:48.293 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:15:48.293 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:15:48.293 #define SPDK_CONFIG_IDXD 1 00:15:48.293 #define SPDK_CONFIG_IDXD_KERNEL 1 00:15:48.293 #undef SPDK_CONFIG_IPSEC_MB 00:15:48.293 #define SPDK_CONFIG_IPSEC_MB_DIR 00:15:48.293 #define SPDK_CONFIG_ISAL 1 00:15:48.293 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:15:48.293 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:15:48.293 #define SPDK_CONFIG_LIBDIR 00:15:48.293 #undef SPDK_CONFIG_LTO 00:15:48.293 #define SPDK_CONFIG_MAX_LCORES 128 00:15:48.293 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:15:48.293 #define SPDK_CONFIG_NVME_CUSE 1 00:15:48.293 #undef SPDK_CONFIG_OCF 00:15:48.293 #define SPDK_CONFIG_OCF_PATH 00:15:48.293 #define SPDK_CONFIG_OPENSSL_PATH 00:15:48.293 #undef SPDK_CONFIG_PGO_CAPTURE 00:15:48.293 #define SPDK_CONFIG_PGO_DIR 00:15:48.293 #undef SPDK_CONFIG_PGO_USE 00:15:48.293 #define SPDK_CONFIG_PREFIX /usr/local 00:15:48.293 #undef SPDK_CONFIG_RAID5F 00:15:48.293 #undef SPDK_CONFIG_RBD 00:15:48.293 #define SPDK_CONFIG_RDMA 1 00:15:48.293 #define SPDK_CONFIG_RDMA_PROV verbs 00:15:48.293 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:15:48.293 #define 
SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:15:48.293 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:15:48.293 #define SPDK_CONFIG_SHARED 1 00:15:48.293 #undef SPDK_CONFIG_SMA 00:15:48.293 #define SPDK_CONFIG_TESTS 1 00:15:48.293 #undef SPDK_CONFIG_TSAN 00:15:48.293 #define SPDK_CONFIG_UBLK 1 00:15:48.293 #define SPDK_CONFIG_UBSAN 1 00:15:48.293 #undef SPDK_CONFIG_UNIT_TESTS 00:15:48.293 #undef SPDK_CONFIG_URING 00:15:48.293 #define SPDK_CONFIG_URING_PATH 00:15:48.293 #undef SPDK_CONFIG_URING_ZNS 00:15:48.293 #undef SPDK_CONFIG_USDT 00:15:48.293 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:15:48.293 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:15:48.293 #undef SPDK_CONFIG_VFIO_USER 00:15:48.293 #define SPDK_CONFIG_VFIO_USER_DIR 00:15:48.293 #define SPDK_CONFIG_VHOST 1 00:15:48.293 #define SPDK_CONFIG_VIRTIO 1 00:15:48.293 #undef SPDK_CONFIG_VTUNE 00:15:48.293 #define SPDK_CONFIG_VTUNE_DIR 00:15:48.293 #define SPDK_CONFIG_WERROR 1 00:15:48.293 #define SPDK_CONFIG_WPDK_DIR 00:15:48.293 #undef SPDK_CONFIG_XNVME 00:15:48.293 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:15:48.293 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:15:48.293 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:15:48.293 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:15:48.293 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:48.293 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:48.294 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:48.294 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:48.294 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:48.294 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:48.294 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:15:48.294 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:48.294 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/common 00:15:48.294 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/common 00:15:48.294 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm 00:15:48.294 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm 00:15:48.294 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/../../../ 00:15:48.294 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:15:48.294 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:15:48.294 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/.run_test_name 00:15:48.294 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power 00:15:48.294 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:15:48.294 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:15:48.294 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:15:48.294 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:15:48.294 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:15:48.294 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:15:48.294 13:46:47 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:15:48.294 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:15:48.294 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:15:48.294 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:15:48.294 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:15:48.294 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:15:48.294 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:15:48.294 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:15:48.294 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:15:48.294 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:15:48.294 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:15:48.294 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power ]] 00:15:48.294 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 1 00:15:48.294 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:15:48.294 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:15:48.294 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:15:48.294 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:15:48.294 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:15:48.294 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:15:48.294 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:15:48.294 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:15:48.294 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:15:48.294 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:15:48.294 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:15:48.294 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:15:48.294 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:15:48.294 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:15:48.294 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:15:48.294 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:15:48.294 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # 
export SPDK_TEST_ISCSI 00:15:48.294 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:15:48.294 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:15:48.294 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:15:48.294 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:15:48.294 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:15:48.294 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:15:48.294 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:15:48.294 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:15:48.294 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:15:48.294 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:15:48.294 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:15:48.294 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:15:48.294 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:15:48.294 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:15:48.294 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:15:48.294 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:15:48.294 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 0 00:15:48.294 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:15:48.294 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:15:48.294 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:15:48.294 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:15:48.295 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:15:48.295 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:15:48.295 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:15:48.295 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : rdma 00:15:48.295 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:15:48.295 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:15:48.295 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:15:48.295 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:15:48.295 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # 
export SPDK_TEST_VHOST 00:15:48.295 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:15:48.295 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:15:48.295 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:15:48.295 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:15:48.295 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:15:48.295 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:15:48.295 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:15:48.295 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:15:48.295 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:15:48.295 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:15:48.295 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:15:48.295 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:15:48.295 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:15:48.295 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:15:48.295 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:15:48.295 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:15:48.295 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:15:48.295 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:15:48.295 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:15:48.295 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:15:48.295 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:15:48.295 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:15:48.295 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:15:48.295 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:15:48.295 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:15:48.295 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:15:48.295 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:15:48.295 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:15:48.295 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:15:48.295 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem 
-- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:15:48.295 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:15:48.295 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:15:48.295 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : v23.11 00:15:48.295 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:15:48.295 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:15:48.295 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:15:48.295 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:15:48.295 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:15:48.295 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:15:48.295 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:15:48.295 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:15:48.295 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:15:48.295 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:15:48.295 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:15:48.295 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:15:48.295 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:15:48.295 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : mlx5 00:15:48.295 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:15:48.295 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:15:48.295 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:15:48.295 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:15:48.295 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:15:48.295 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:15:48.295 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:15:48.295 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:15:48.295 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:15:48.295 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:15:48.295 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:15:48.295 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:15:48.295 13:46:47 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:15:48.295 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:15:48.295 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:15:48.295 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:15:48.295 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:15:48.295 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:15:48.295 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:15:48.295 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:15:48.295 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:15:48.295 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:15:48.295 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:15:48.295 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib 00:15:48.295 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib 00:15:48.295 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:15:48.295 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:15:48.295 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:15:48.295 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:15:48.296 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:15:48.296 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:15:48.296 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:15:48.296 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:15:48.296 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:15:48.296 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:15:48.296 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:15:48.296 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:15:48.296 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:15:48.296 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:15:48.296 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:15:48.296 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:15:48.296 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:15:48.296 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:15:48.296 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:15:48.296 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:15:48.296 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:15:48.296 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:15:48.296 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:15:48.296 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:15:48.296 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:15:48.296 13:46:47 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:15:48.296 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:15:48.296 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:15:48.296 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:15:48.296 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:15:48.296 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:15:48.296 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:15:48.296 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:15:48.296 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:15:48.296 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:15:48.296 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:15:48.296 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:15:48.296 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:15:48.296 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:15:48.296 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:15:48.296 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:15:48.296 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:15:48.296 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:15:48.296 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:15:48.296 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:15:48.297 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:15:48.297 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:15:48.297 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:15:48.297 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # 
valgrind= 00:15:48.297 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:15:48.297 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:15:48.297 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:15:48.297 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:15:48.297 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:15:48.297 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:15:48.297 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j112 00:15:48.297 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:15:48.297 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:15:48.297 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:15:48.297 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:15:48.297 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:15:48.297 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:15:48.297 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=rdma 00:15:48.297 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 1646339 ]] 00:15:48.297 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 1646339 00:15:48.297 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1696 -- # set_test_storage 2147483648 00:15:48.297 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:15:48.297 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:15:48.297 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:15:48.297 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:15:48.297 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:15:48.297 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:15:48.297 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:15:48.297 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.4Sq54t 00:15:48.297 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:15:48.297 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:15:48.297 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' 
]] 00:15:48.297 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target /tmp/spdk.4Sq54t/tests/target /tmp/spdk.4Sq54t 00:15:48.297 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:15:48.297 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:15:48.297 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:15:48.297 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:15:48.297 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:15:48.297 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:15:48.297 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:15:48.297 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=67108864 00:15:48.297 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:15:48.297 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:15:48.297 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:15:48.297 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:15:48.297 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=4096 00:15:48.297 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:15:48.297 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5284425728 00:15:48.297 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:15:48.297 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:15:48.297 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:15:48.297 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=71227035648 00:15:48.297 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=78631636992 00:15:48.297 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=7404601344 00:15:48.297 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:15:48.297 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:15:48.297 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:15:48.297 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=39302356992 00:15:48.297 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem 
-- common/autotest_common.sh@375 -- # sizes["$mount"]=39315816448 00:15:48.297 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=13459456 00:15:48.297 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:15:48.297 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:15:48.297 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:15:48.297 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=15703232512 00:15:48.297 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=15726329856 00:15:48.297 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=23097344 00:15:48.297 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:15:48.297 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:15:48.297 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:15:48.297 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=39315443712 00:15:48.297 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=39315820544 00:15:48.297 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=376832 00:15:48.297 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:15:48.297 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:15:48.297 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:15:48.297 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=7863148544 00:15:48.297 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=7863160832 00:15:48.297 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:15:48.297 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:15:48.297 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:15:48.297 * Looking for test storage... 
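The probe announced here, and carried out in the df/target_space entries that follow, reduces to a simple selection loop: build the candidate directories (the test dir, a mktemp-style fallback under /tmp), parse one df pass, and take the first filesystem that can still hold the requested ~2 GiB. A minimal sketch of that idea under an illustrative name — find_test_storage is not the SPDK helper, and the real set_test_storage additionally special-cases tmpfs/ramfs mounts and grows the target:

    # Probe each candidate directory and return the first one whose filesystem
    # has at least $requested_size bytes available (a simplified stand-in for
    # autotest_common.sh's set_test_storage).
    find_test_storage() {
        local requested_size=$1; shift
        local dir avail mount
        for dir in "$@"; do
            # df -P emits one portable record per FS: name, 1K-blocks, used, avail, use%, mount
            read -r _ _ _ avail _ mount < <(df -P "$dir" 2>/dev/null | tail -n 1)
            [[ -n $avail ]] || continue       # directory missing or df failed
            if (( avail * 1024 >= requested_size )); then
                printf '* Found test storage at %s (mounted on %s)\n' "$dir" "$mount"
                return 0
            fi
        done
        return 1
    }

    # e.g.: find_test_storage $((2 * 1024**3)) "$testdir" /tmp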
00:15:48.297 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:15:48.297 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:15:48.297 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:15:48.298 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:15:48.298 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:15:48.298 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=71227035648 00:15:48.298 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:15:48.298 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:15:48.298 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:15:48.298 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:15:48.298 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:15:48.298 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=9619193856 00:15:48.298 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:15:48.298 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:15:48.298 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:15:48.298 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:15:48.298 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:15:48.298 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:15:48.298 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1698 -- # set -o errtrace 00:15:48.298 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1699 -- # shopt -s extdebug 00:15:48.298 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1700 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:15:48.298 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1702 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:15:48.298 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1703 -- # true 00:15:48.298 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # xtrace_fd 00:15:48.298 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:15:48.298 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:15:48.298 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:15:48.298 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:15:48.298 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:15:48.298 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:15:48.298 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:15:48.298 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:15:48.298 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:48.298 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:15:48.298 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:48.298 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:48.298 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:48.298 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:48.298 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:48.298 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:15:48.298 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:15:48.298 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:15:48.298 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:15:48.298 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:15:48.298 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:15:48.298 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:15:48.298 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:48.298 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:15:48.298 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:15:48.298 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:48.298 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:48.298 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:15:48.298 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:15:48.298 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:48.298 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:15:48.298 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:15:48.298 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:15:48.298 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:15:48.298 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:48.298 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:15:48.298 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:15:48.298 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:48.298 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:48.298 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:15:48.298 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:48.298 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:48.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:48.298 --rc genhtml_branch_coverage=1 00:15:48.298 --rc genhtml_function_coverage=1 00:15:48.298 --rc genhtml_legend=1 00:15:48.298 --rc geninfo_all_blocks=1 00:15:48.298 --rc geninfo_unexecuted_blocks=1 00:15:48.298 00:15:48.298 ' 00:15:48.298 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:48.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:48.298 --rc genhtml_branch_coverage=1 00:15:48.298 --rc genhtml_function_coverage=1 00:15:48.298 --rc genhtml_legend=1 00:15:48.298 --rc geninfo_all_blocks=1 00:15:48.298 --rc geninfo_unexecuted_blocks=1 00:15:48.298 00:15:48.298 ' 00:15:48.298 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:48.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:48.298 --rc genhtml_branch_coverage=1 00:15:48.298 --rc genhtml_function_coverage=1 00:15:48.298 --rc genhtml_legend=1 00:15:48.298 --rc geninfo_all_blocks=1 00:15:48.298 --rc geninfo_unexecuted_blocks=1 00:15:48.298 00:15:48.298 ' 00:15:48.298 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:48.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:48.298 --rc genhtml_branch_coverage=1 00:15:48.298 --rc genhtml_function_coverage=1 00:15:48.298 --rc genhtml_legend=1 00:15:48.298 --rc geninfo_all_blocks=1 00:15:48.298 --rc geninfo_unexecuted_blocks=1 00:15:48.298 00:15:48.298 ' 00:15:48.298 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:15:48.298 13:46:47 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:15:48.298 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:48.298 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:48.298 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:48.298 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:48.299 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:48.299 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:48.299 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:48.299 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:48.299 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:48.299 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:48.299 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:15:48.299 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:15:48.299 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:48.299 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:48.299 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:48.299 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:48.299 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:15:48.299 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:15:48.299 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:48.299 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:48.299 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:48.299 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:48.299 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:48.299 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:48.299 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:15:48.299 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:48.299 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:15:48.299 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:48.299 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:48.299 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:48.299 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:48.299 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:48.299 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:48.299 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:48.299 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:48.299 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:48.299 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:48.299 13:46:47 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:15:48.299 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:48.299 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:15:48.299 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:15:48.299 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:48.299 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:48.299 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:48.299 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:48.299 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:48.299 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:48.299 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:48.299 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:48.299 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:48.299 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:15:48.299 13:46:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:15:54.873 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:54.873 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:15:54.873 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:54.873 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:54.873 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:54.873 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:54.873 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:54.873 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:15:54.873 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:54.873 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:15:54.873 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:15:54.873 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:15:54.873 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:15:54.873 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:15:54.873 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:15:54.873 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:54.873 13:46:53 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:54.873 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:54.873 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:54.873 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:54.873 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:54.873 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:54.873 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:54.873 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:54.873 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:54.873 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:54.873 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:54.873 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:54.873 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:15:54.873 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:15:54.873 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:15:54.873 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:15:54.873 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:15:54.873 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:54.873 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:54.873 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:15:54.873 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:15:54.873 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:15:54.873 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:15:54.873 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:54.873 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:54.873 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:15:54.873 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:15:54.873 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:54.873 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 
(0x15b3 - 0x1015)' 00:15:54.873 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:15:54.873 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:15:54.873 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:15:54.873 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:54.873 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:54.873 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:15:54.873 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:15:54.873 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:54.873 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:15:54.873 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:54.873 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:54.873 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:15:54.873 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:54.873 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:54.873 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:15:54.873 Found net devices under 0000:18:00.0: mlx_0_0 00:15:54.873 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:54.873 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:54.873 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:54.873 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:15:54.873 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:54.873 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:54.873 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:15:54.873 Found net devices under 0000:18:00.1: mlx_0_1 00:15:54.873 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:54.873 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:54.873 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:15:54.873 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:54.873 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:15:54.873 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:15:54.873 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@448 -- # rdma_device_init 
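The device discovery traced above comes down to two lookups: match each PCI function against a table of known RDMA-capable vendor:device IDs (mlx5, 0x15b3:0x1015, on this host), then resolve the bound Linux interface through sysfs, where the kernel publishes it under /sys/bus/pci/devices/<addr>/net/. A short sketch of the sysfs half — list_pci_net_devs is an illustrative name, not a function in nvmf/common.sh, but the glob is the same one the script expands into pci_net_devs[]:

    # For each PCI address, print the network interface(s) the kernel has
    # bound to it by listing /sys/bus/pci/devices/<addr>/net/.
    list_pci_net_devs() {
        local pci netdir
        for pci in "$@"; do
            for netdir in "/sys/bus/pci/devices/$pci/net/"*; do
                [[ -e $netdir ]] || continue    # unmatched glob: no netdev bound
                printf 'Found net devices under %s: %s\n' "$pci" "${netdir##*/}"
            done
        done
    }

    # e.g.: list_pci_net_devs 0000:18:00.0 0000:18:00.1
    #   Found net devices under 0000:18:00.0: mlx_0_0
    #   Found net devices under 0000:18:00.1: mlx_0_1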
00:15:54.873 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:15:54.873 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@62 -- # uname 00:15:54.873 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:15:54.873 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@66 -- # modprobe ib_cm 00:15:54.873 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@67 -- # modprobe ib_core 00:15:54.873 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@68 -- # modprobe ib_umad 00:15:54.873 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:15:54.873 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@70 -- # modprobe iw_cm 00:15:54.873 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:15:54.873 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:15:54.873 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@530 -- # allocate_nic_ips 00:15:54.873 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:15:54.873 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@77 -- # get_rdma_if_list 00:15:54.873 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:54.873 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:15:54.873 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:15:54.873 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:54.873 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:15:54.873 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:15:54.873 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:54.873 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:54.873 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@108 -- # echo mlx_0_0 00:15:54.873 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@109 -- # continue 2 00:15:54.873 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:15:54.873 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:54.873 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:54.873 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:54.873 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:54.873 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@108 -- # echo mlx_0_1 00:15:54.873 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@109 -- # continue 2 00:15:54.873 13:46:53 
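rdma_device_init then loads the IB/RDMA kernel stack and filters the discovered netdevs down to RDMA-capable ones. A sketch of the two helpers as they appear in the trace (load_ib_rdma_modules and get_rdma_if_list; a reconstruction, not verbatim source):

    load_ib_rdma_modules() {
        [ "$(uname)" != Linux ] && return 0
        # IB verbs plus the connection-manager stack needed for NVMe-oF over RDMA.
        modprobe ib_cm
        modprobe ib_core
        modprobe ib_umad
        modprobe ib_uverbs
        modprobe iw_cm
        modprobe rdma_cm
        modprobe rdma_ucm
    }

    get_rdma_if_list() {
        local net_dev rxe_net_dev rxe_net_devs
        mapfile -t rxe_net_devs < <(rxe_cfg rxe-net)
        # Emit only those netdevs that the rxe tooling also reports.
        for net_dev in "${net_devs[@]}"; do
            for rxe_net_dev in "${rxe_net_devs[@]}"; do
                if [[ $net_dev == "$rxe_net_dev" ]]; then
                    echo "$net_dev"
                    continue 2
                fi
            done
        done
    }
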
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:15:54.873 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:15:54.873 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:15:54.873 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:15:54.873 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:15:54.873 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:15:54.873 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:15:54.873 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:15:54.873 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:15:54.873 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:54.873 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:15:54.873 altname enp24s0f0np0 00:15:54.873 altname ens785f0np0 00:15:54.873 inet 192.168.100.8/24 scope global mlx_0_0 00:15:54.873 valid_lft forever preferred_lft forever 00:15:54.873 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:15:54.873 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:15:54.873 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:15:54.873 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:15:54.873 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:15:54.873 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:15:54.873 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:15:54.873 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:15:54.873 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:15:54.873 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:54.873 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:15:54.873 altname enp24s0f1np1 00:15:54.873 altname ens785f1np1 00:15:54.873 inet 192.168.100.9/24 scope global mlx_0_1 00:15:54.873 valid_lft forever preferred_lft forever 00:15:54.873 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:15:54.873 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:54.873 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:15:54.873 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:15:54.873 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:15:54.873 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@90 -- # get_rdma_if_list 00:15:54.873 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:54.873 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:15:54.873 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:15:54.873 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:54.873 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:15:54.873 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:15:54.873 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:54.873 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:54.873 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@108 -- # echo mlx_0_0 00:15:54.873 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@109 -- # continue 2 00:15:54.873 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:15:54.873 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:54.873 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:54.873 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:54.874 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:54.874 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@108 -- # echo mlx_0_1 00:15:54.874 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@109 -- # continue 2 00:15:54.874 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:15:54.874 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:15:54.874 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:15:54.874 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:15:54.874 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:15:54.874 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:15:54.874 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:15:54.874 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:15:54.874 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:15:54.874 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:15:54.874 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:15:54.874 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:15:54.874 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:15:54.874 192.168.100.9' 00:15:54.874 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:15:54.874 192.168.100.9' 
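allocate_nic_ips confirms an IPv4 address per RDMA interface; the extraction traced at nvmf/common.sh@116-117 is just:

    get_ip_address() {
        local interface=$1
        # Field 4 of `ip -o -4 addr show` is addr/prefix; cut drops the prefix length.
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }

    get_ip_address mlx_0_0   # -> 192.168.100.8 in this run
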
00:15:54.874 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@485 -- # head -n 1 00:15:54.874 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:15:54.874 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:15:54.874 192.168.100.9' 00:15:54.874 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@486 -- # tail -n +2 00:15:54.874 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@486 -- # head -n 1 00:15:54.874 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:15:54.874 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:15:54.874 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:15:54.874 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:15:54.874 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:15:54.874 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:15:54.874 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:15:54.874 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:54.874 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:54.874 13:46:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:15:54.874 ************************************ 00:15:54.874 START TEST nvmf_filesystem_no_in_capsule 00:15:54.874 ************************************ 00:15:54.874 13:46:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:15:54.874 13:46:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:15:54.874 13:46:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:15:54.874 13:46:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:54.874 13:46:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:54.874 13:46:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:54.874 13:46:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=1649647 00:15:54.874 13:46:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 1649647 00:15:54.874 13:46:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:54.874 13:46:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 1649647 ']' 00:15:54.874 13:46:54 
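The two target addresses and the host-side prerequisites then fall out directly (all values as traced):

    RDMA_IP_LIST=$(get_available_rdma_ips)                     # 192.168.100.8 and .9 here
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)   # 192.168.100.8
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # 192.168.100.9
    NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'
    modprobe nvme-rdma   # host-side initiator driver, needed before `nvme connect -t rdma`
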
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:54.874 13:46:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:54.874 13:46:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:54.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:54.874 13:46:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:54.874 13:46:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:54.874 [2024-12-05 13:46:54.078446] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 00:15:54.874 [2024-12-05 13:46:54.078492] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:54.874 [2024-12-05 13:46:54.155133] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:54.874 [2024-12-05 13:46:54.178434] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:54.874 [2024-12-05 13:46:54.178469] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:54.874 [2024-12-05 13:46:54.178475] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:54.874 [2024-12-05 13:46:54.178480] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:54.874 [2024-12-05 13:46:54.178485] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:54.874 [2024-12-05 13:46:54.179768] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:54.874 [2024-12-05 13:46:54.179876] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:54.874 [2024-12-05 13:46:54.180003] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:54.874 [2024-12-05 13:46:54.180003] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:54.874 13:46:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:54.874 13:46:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:15:54.874 13:46:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:54.874 13:46:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:54.874 13:46:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:54.874 13:46:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:54.874 13:46:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:15:54.874 13:46:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0 00:15:54.874 13:46:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.874 13:46:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:54.874 [2024-12-05 13:46:54.307868] rdma.c:2773:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:15:54.874 [2024-12-05 13:46:54.326316] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1475f30/0x147a420) succeed. 00:15:54.874 [2024-12-05 13:46:54.334459] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x14775c0/0x14bbac0) succeed. 
00:15:54.874 13:46:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.874 13:46:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:15:54.874 13:46:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.874 13:46:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:54.874 Malloc1 00:15:54.874 13:46:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.874 13:46:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:54.874 13:46:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.874 13:46:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:54.874 13:46:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.874 13:46:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:54.874 13:46:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.874 13:46:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:54.874 13:46:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.874 13:46:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:15:54.874 13:46:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.874 13:46:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:54.874 [2024-12-05 13:46:54.576869] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:54.874 13:46:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.874 13:46:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:15:54.874 13:46:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:15:54.874 13:46:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:15:54.874 13:46:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:15:54.874 13:46:54 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:15:54.874 13:46:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:15:54.874 13:46:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.874 13:46:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:54.874 13:46:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.874 13:46:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:15:54.874 { 00:15:54.874 "name": "Malloc1", 00:15:54.874 "aliases": [ 00:15:54.874 "fdb8828d-9a07-4e2b-bdd6-b90e2596707d" 00:15:54.874 ], 00:15:54.874 "product_name": "Malloc disk", 00:15:54.874 "block_size": 512, 00:15:54.874 "num_blocks": 1048576, 00:15:54.874 "uuid": "fdb8828d-9a07-4e2b-bdd6-b90e2596707d", 00:15:54.874 "assigned_rate_limits": { 00:15:54.874 "rw_ios_per_sec": 0, 00:15:54.874 "rw_mbytes_per_sec": 0, 00:15:54.874 "r_mbytes_per_sec": 0, 00:15:54.874 "w_mbytes_per_sec": 0 00:15:54.874 }, 00:15:54.874 "claimed": true, 00:15:54.874 "claim_type": "exclusive_write", 00:15:54.874 "zoned": false, 00:15:54.874 "supported_io_types": { 00:15:54.874 "read": true, 00:15:54.874 "write": true, 00:15:54.874 "unmap": true, 00:15:54.874 "flush": true, 00:15:54.874 "reset": true, 00:15:54.874 "nvme_admin": false, 00:15:54.874 "nvme_io": false, 00:15:54.874 "nvme_io_md": false, 00:15:54.874 "write_zeroes": true, 00:15:54.874 "zcopy": true, 00:15:54.874 "get_zone_info": false, 00:15:54.874 "zone_management": false, 00:15:54.874 "zone_append": false, 00:15:54.874 "compare": false, 00:15:54.874 "compare_and_write": false, 00:15:54.874 "abort": true, 00:15:54.874 "seek_hole": false, 00:15:54.874 "seek_data": false, 00:15:54.874 "copy": true, 00:15:54.874 "nvme_iov_md": false 00:15:54.874 }, 00:15:54.874 "memory_domains": [ 00:15:54.874 { 00:15:54.874 "dma_device_id": "system", 00:15:54.874 "dma_device_type": 1 00:15:54.874 }, 00:15:54.874 { 00:15:54.874 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:54.874 "dma_device_type": 2 00:15:54.874 } 00:15:54.874 ], 00:15:54.874 "driver_specific": {} 00:15:54.874 } 00:15:54.874 ]' 00:15:54.874 13:46:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:15:54.874 13:46:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:15:54.874 13:46:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:15:54.874 13:46:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:15:54.874 13:46:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:15:54.874 13:46:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:15:54.874 13:46:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # 
malloc_size=536870912 00:15:54.874 13:46:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:15:56.251 13:46:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:15:56.251 13:46:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:15:56.251 13:46:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:15:56.251 13:46:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:15:56.251 13:46:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:15:58.148 13:46:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:15:58.148 13:46:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:15:58.148 13:46:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:15:58.148 13:46:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:15:58.148 13:46:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:15:58.148 13:46:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:15:58.148 13:46:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:15:58.148 13:46:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:15:58.148 13:46:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:15:58.148 13:46:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:15:58.148 13:46:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:15:58.148 13:46:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:15:58.148 13:46:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:15:58.148 13:46:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:15:58.148 13:46:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:15:58.148 13:46:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 
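Host side, the same flow boils down to one connect plus a poll until the namespace appears. waitforserial below is a simplified reconstruction of the helper traced at common/autotest_common.sh@1202-1212 (the real one also accepts an expected device count):

    nvme connect -i 15 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 \
        --hostid=00bafac1-9c9c-e711-906e-0017a4403562

    waitforserial() {
        local serial=$1 i=0
        while (( i++ <= 15 )); do
            sleep 2
            # Count block devices whose SERIAL column matches the subsystem serial.
            (( $(lsblk -l -o NAME,SERIAL | grep -c "$serial") == 1 )) && return 0
        done
        return 1
    }
    waitforserial SPDKISFASTANDAWESOME   # resolves to nvme0n1, 536870912 bytes
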
-- # (( nvme_size == malloc_size )) 00:15:58.148 13:46:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:15:58.148 13:46:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:15:58.148 13:46:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:15:59.085 13:46:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:15:59.085 13:46:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:15:59.085 13:46:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:15:59.085 13:46:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:59.085 13:46:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:59.085 ************************************ 00:15:59.085 START TEST filesystem_ext4 00:15:59.085 ************************************ 00:15:59.085 13:46:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:15:59.085 13:46:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:15:59.085 13:46:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:15:59.085 13:46:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:15:59.085 13:46:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:15:59.085 13:46:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:15:59.085 13:46:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:15:59.085 13:46:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:15:59.085 13:46:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:15:59.085 13:46:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:15:59.085 13:46:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:15:59.085 mke2fs 1.47.0 (5-Feb-2023) 00:15:59.345 Discarding device blocks: 0/522240 done 00:15:59.345 Creating filesystem with 522240 1k blocks and 130560 inodes 00:15:59.345 Filesystem UUID: 7d6bac17-c2d8-4d44-a77d-4ee06055c5dc 00:15:59.345 Superblock backups stored on 
blocks: 00:15:59.345 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:15:59.345 00:15:59.345 Allocating group tables: 0/64 done 00:15:59.345 Writing inode tables: 0/64 done 00:15:59.345 Creating journal (8192 blocks): done 00:15:59.345 Writing superblocks and filesystem accounting information: 0/64 done 00:15:59.345 00:15:59.345 13:46:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:15:59.345 13:46:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:15:59.345 13:46:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:15:59.345 13:46:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:15:59.345 13:46:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:15:59.345 13:46:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:15:59.345 13:46:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:15:59.345 13:46:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:15:59.345 13:46:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 1649647 00:15:59.345 13:46:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:15:59.345 13:46:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:15:59.345 13:46:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:15:59.345 13:46:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:15:59.345 00:15:59.345 real 0m0.177s 00:15:59.345 user 0m0.026s 00:15:59.345 sys 0m0.060s 00:15:59.345 13:46:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:59.345 13:46:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:15:59.345 ************************************ 00:15:59.345 END TEST filesystem_ext4 00:15:59.345 ************************************ 00:15:59.345 13:46:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:15:59.346 13:46:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:15:59.346 13:46:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:59.346 13:46:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule 
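Each filesystem_* subtest then runs the same smoke cycle against the mounted partition and checks that the target survived it (commands as traced in target/filesystem.sh@23-43; the umount retry loop around $i in the real script is elided):

    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa
    sync
    rm /mnt/device/aaa
    sync
    umount /mnt/device
    kill -0 "$nvmfpid"                        # nvmf_tgt must still be running
    lsblk -l -o NAME | grep -q -w nvme0n1     # namespace still visible
    lsblk -l -o NAME | grep -q -w nvme0n1p1   # partition still visible
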
-- common/autotest_common.sh@10 -- # set +x 00:15:59.346 ************************************ 00:15:59.346 START TEST filesystem_btrfs 00:15:59.346 ************************************ 00:15:59.346 13:46:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:15:59.346 13:46:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:15:59.346 13:46:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:15:59.346 13:46:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:15:59.346 13:46:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:15:59.346 13:46:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:15:59.346 13:46:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:15:59.346 13:46:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:15:59.346 13:46:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:15:59.346 13:46:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:15:59.346 13:46:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:15:59.605 btrfs-progs v6.8.1 00:15:59.605 See https://btrfs.readthedocs.io for more information. 00:15:59.605 00:15:59.605 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:15:59.605 NOTE: several default settings have changed in version 5.15, please make sure 00:15:59.605 this does not affect your deployments: 00:15:59.605 - DUP for metadata (-m dup) 00:15:59.605 - enabled no-holes (-O no-holes) 00:15:59.605 - enabled free-space-tree (-R free-space-tree) 00:15:59.605 00:15:59.605 Label: (null) 00:15:59.605 UUID: 2430e6be-e5c0-4e6e-8bb8-659bc9d9b301 00:15:59.605 Node size: 16384 00:15:59.605 Sector size: 4096 (CPU page size: 4096) 00:15:59.605 Filesystem size: 510.00MiB 00:15:59.605 Block group profiles: 00:15:59.605 Data: single 8.00MiB 00:15:59.605 Metadata: DUP 32.00MiB 00:15:59.605 System: DUP 8.00MiB 00:15:59.605 SSD detected: yes 00:15:59.605 Zoned device: no 00:15:59.605 Features: extref, skinny-metadata, no-holes, free-space-tree 00:15:59.605 Checksum: crc32c 00:15:59.605 Number of devices: 1 00:15:59.605 Devices: 00:15:59.605 ID SIZE PATH 00:15:59.605 1 510.00MiB /dev/nvme0n1p1 00:15:59.605 00:15:59.605 13:46:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:15:59.605 13:46:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:15:59.605 13:46:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:15:59.605 13:46:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:15:59.605 13:46:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:15:59.605 13:46:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:15:59.605 13:46:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:15:59.605 13:46:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:15:59.605 13:46:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 1649647 00:15:59.605 13:46:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:15:59.605 13:46:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:15:59.605 13:46:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:15:59.605 13:46:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:15:59.605 00:15:59.605 real 0m0.241s 00:15:59.605 user 0m0.019s 00:15:59.605 sys 0m0.116s 00:15:59.605 13:46:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:59.605 13:46:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:15:59.605 ************************************ 00:15:59.605 END TEST filesystem_btrfs 
00:15:59.605 ************************************ 00:15:59.605 13:46:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:15:59.605 13:46:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:15:59.605 13:46:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:59.605 13:46:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:59.605 ************************************ 00:15:59.605 START TEST filesystem_xfs 00:15:59.605 ************************************ 00:15:59.605 13:46:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:15:59.605 13:46:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:15:59.605 13:46:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:15:59.605 13:46:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:15:59.606 13:46:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:15:59.606 13:46:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:15:59.606 13:46:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:15:59.606 13:46:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:15:59.606 13:46:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:15:59.606 13:46:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:15:59.606 13:46:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:15:59.865 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:15:59.865 = sectsz=512 attr=2, projid32bit=1 00:15:59.865 = crc=1 finobt=1, sparse=1, rmapbt=0 00:15:59.865 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:15:59.865 data = bsize=4096 blocks=130560, imaxpct=25 00:15:59.865 = sunit=0 swidth=0 blks 00:15:59.865 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:15:59.865 log =internal log bsize=4096 blocks=16384, version=2 00:15:59.865 = sectsz=512 sunit=0 blks, lazy-count=1 00:15:59.865 realtime =none extsz=4096 blocks=0, rtextents=0 00:15:59.865 Discarding blocks...Done. 
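All three subtests funnel through one make_filesystem helper; condensed from the repeated traces at common/autotest_common.sh@930-941 (ext4 alone takes -F, the others get -f; the retry counter $i of the real helper is elided):

    make_filesystem() {
        local fstype=$1 dev_name=$2 force
        if [[ $fstype == ext4 ]]; then
            force=-F
        else
            force=-f
        fi
        mkfs."$fstype" $force "$dev_name"
    }

    make_filesystem xfs /dev/nvme0n1p1   # the invocation behind the geometry dump above
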
00:15:59.865 13:46:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:15:59.865 13:46:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:15:59.865 13:46:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:15:59.865 13:46:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:15:59.865 13:46:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:15:59.865 13:46:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:15:59.865 13:46:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:15:59.865 13:46:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:15:59.865 13:46:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 1649647 00:15:59.865 13:46:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:15:59.865 13:46:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:15:59.865 13:46:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:15:59.866 13:46:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:15:59.866 00:15:59.866 real 0m0.190s 00:15:59.866 user 0m0.022s 00:15:59.866 sys 0m0.065s 00:15:59.866 13:46:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:59.866 13:46:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:15:59.866 ************************************ 00:15:59.866 END TEST filesystem_xfs 00:15:59.866 ************************************ 00:15:59.866 13:46:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:15:59.866 13:46:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:15:59.866 13:46:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:00.828 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:00.829 13:47:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:00.829 13:47:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:16:00.829 13:47:00 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:00.829 13:47:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:00.829 13:47:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:00.829 13:47:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:00.829 13:47:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:16:00.829 13:47:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:00.829 13:47:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.829 13:47:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:01.088 13:47:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.088 13:47:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:16:01.088 13:47:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 1649647 00:16:01.088 13:47:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 1649647 ']' 00:16:01.088 13:47:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 1649647 00:16:01.088 13:47:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:16:01.088 13:47:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:01.088 13:47:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1649647 00:16:01.088 13:47:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:01.088 13:47:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:01.088 13:47:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1649647' 00:16:01.088 killing process with pid 1649647 00:16:01.088 13:47:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 1649647 00:16:01.088 13:47:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@978 -- # wait 1649647 00:16:01.348 13:47:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:16:01.348 00:16:01.348 real 0m7.071s 00:16:01.348 user 0m27.610s 00:16:01.348 sys 0m1.022s 00:16:01.348 13:47:01 
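Teardown, condensed from the trace: disconnect the host, wait for the serial to disappear from lsblk, delete the subsystem, then kill and reap the target. killprocess below is a sketch of the helper at common/autotest_common.sh@954-978 (the sudo handling is simplified):

    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    waitforserial_disconnect SPDKISFASTANDAWESOME   # polls lsblk until the serial is gone
    rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" || return 1                           # still running?
        if [ "$(uname)" = Linux ]; then
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")  # reactor_0 here
            [ "$process_name" = sudo ] && return 1           # simplified guard
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }
    killprocess 1649647   # whole no_in_capsule subtest: 7.07 s wall time
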
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:01.348 13:47:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:01.348 ************************************ 00:16:01.348 END TEST nvmf_filesystem_no_in_capsule 00:16:01.348 ************************************ 00:16:01.348 13:47:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:16:01.348 13:47:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:01.348 13:47:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:01.348 13:47:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:16:01.348 ************************************ 00:16:01.348 START TEST nvmf_filesystem_in_capsule 00:16:01.348 ************************************ 00:16:01.348 13:47:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:16:01.348 13:47:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:16:01.348 13:47:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:16:01.348 13:47:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:01.348 13:47:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:01.348 13:47:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:01.348 13:47:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=1651111 00:16:01.348 13:47:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 1651111 00:16:01.348 13:47:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:01.348 13:47:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 1651111 ']' 00:16:01.348 13:47:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:01.348 13:47:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:01.349 13:47:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:01.349 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:01.349 13:47:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:01.349 13:47:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:01.608 [2024-12-05 13:47:01.225305] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 00:16:01.608 [2024-12-05 13:47:01.225348] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:01.608 [2024-12-05 13:47:01.298806] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:01.608 [2024-12-05 13:47:01.319045] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:01.608 [2024-12-05 13:47:01.319083] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:01.608 [2024-12-05 13:47:01.319090] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:01.608 [2024-12-05 13:47:01.319095] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:01.608 [2024-12-05 13:47:01.319099] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:01.608 [2024-12-05 13:47:01.320473] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:01.608 [2024-12-05 13:47:01.320581] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:01.608 [2024-12-05 13:47:01.320689] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:01.608 [2024-12-05 13:47:01.320690] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:01.608 13:47:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:01.608 13:47:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:16:01.608 13:47:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:01.608 13:47:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:01.608 13:47:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:01.608 13:47:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:01.608 13:47:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:16:01.608 13:47:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 4096 00:16:01.608 13:47:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.608 13:47:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:01.868 [2024-12-05 13:47:01.478953] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x189ff30/0x18a4420) 
succeed. 00:16:01.868 [2024-12-05 13:47:01.487141] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x18a15c0/0x18e5ac0) succeed. 00:16:01.868 13:47:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.868 13:47:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:16:01.868 13:47:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.868 13:47:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:02.127 Malloc1 00:16:02.127 13:47:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.127 13:47:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:02.127 13:47:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.127 13:47:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:02.127 13:47:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.127 13:47:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:02.127 13:47:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.127 13:47:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:02.127 13:47:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.127 13:47:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:16:02.127 13:47:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.127 13:47:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:02.127 [2024-12-05 13:47:01.757495] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:16:02.127 13:47:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.127 13:47:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:16:02.127 13:47:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:16:02.127 13:47:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:16:02.127 13:47:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 
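The rpc_cmd calls traced above amount to the standard five-step NVMe-oF target bring-up for this in-capsule variant. As a sketch only (assuming a running nvmf_tgt and the stock scripts/rpc.py client on its default socket — neither appears explicitly in this log), the same configuration replayed by hand would be:

# Sketch: replay of the target-side setup traced above via scripts/rpc.py.
# The ./scripts/rpc.py path and the already-running nvmf_tgt are assumptions.
./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 \
    -u 8192 -c 4096                     # -c 4096: in-capsule data size, the point of this test
./scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1   # 512 MiB backing bdev, 512 B blocks
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
    -a -s SPDKISFASTANDAWESOME          # -a: allow any host; -s: serial the host greps for
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t rdma -a 192.168.100.8 -s 4420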
00:16:02.127 13:47:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:16:02.127 13:47:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:16:02.127 13:47:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.127 13:47:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:02.127 13:47:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.127 13:47:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:16:02.127 { 00:16:02.127 "name": "Malloc1", 00:16:02.127 "aliases": [ 00:16:02.127 "16483cb4-eb31-4d5d-8847-7ef779ccad07" 00:16:02.127 ], 00:16:02.127 "product_name": "Malloc disk", 00:16:02.127 "block_size": 512, 00:16:02.127 "num_blocks": 1048576, 00:16:02.127 "uuid": "16483cb4-eb31-4d5d-8847-7ef779ccad07", 00:16:02.127 "assigned_rate_limits": { 00:16:02.127 "rw_ios_per_sec": 0, 00:16:02.127 "rw_mbytes_per_sec": 0, 00:16:02.127 "r_mbytes_per_sec": 0, 00:16:02.127 "w_mbytes_per_sec": 0 00:16:02.127 }, 00:16:02.127 "claimed": true, 00:16:02.127 "claim_type": "exclusive_write", 00:16:02.127 "zoned": false, 00:16:02.127 "supported_io_types": { 00:16:02.127 "read": true, 00:16:02.127 "write": true, 00:16:02.127 "unmap": true, 00:16:02.127 "flush": true, 00:16:02.127 "reset": true, 00:16:02.127 "nvme_admin": false, 00:16:02.127 "nvme_io": false, 00:16:02.127 "nvme_io_md": false, 00:16:02.127 "write_zeroes": true, 00:16:02.127 "zcopy": true, 00:16:02.127 "get_zone_info": false, 00:16:02.127 "zone_management": false, 00:16:02.127 "zone_append": false, 00:16:02.127 "compare": false, 00:16:02.127 "compare_and_write": false, 00:16:02.127 "abort": true, 00:16:02.127 "seek_hole": false, 00:16:02.127 "seek_data": false, 00:16:02.127 "copy": true, 00:16:02.127 "nvme_iov_md": false 00:16:02.127 }, 00:16:02.127 "memory_domains": [ 00:16:02.127 { 00:16:02.127 "dma_device_id": "system", 00:16:02.127 "dma_device_type": 1 00:16:02.127 }, 00:16:02.127 { 00:16:02.127 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:02.127 "dma_device_type": 2 00:16:02.127 } 00:16:02.127 ], 00:16:02.127 "driver_specific": {} 00:16:02.127 } 00:16:02.127 ]' 00:16:02.128 13:47:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:16:02.128 13:47:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:16:02.128 13:47:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:16:02.128 13:47:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:16:02.128 13:47:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:16:02.128 13:47:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:16:02.128 13:47:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 
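get_bdev_size, traced above, derives the device size from the bdev_get_bdevs JSON using the same two jq filters visible in the trace; a minimal equivalent sketch (rpc.py path again assumed):

# Sketch: compute the Malloc1 size the way get_bdev_size does above.
info=$(./scripts/rpc.py bdev_get_bdevs -b Malloc1)
bs=$(jq '.[] .block_size' <<< "$info")    # 512, per the trace
nb=$(jq '.[] .num_blocks' <<< "$info")    # 1048576, per the trace
echo $(( bs * nb / 1024 / 1024 ))         # 512 MiB; 536870912 bytes, the malloc_size checked later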
00:16:02.128 13:47:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:16:03.064 13:47:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:16:03.064 13:47:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:16:03.064 13:47:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:03.064 13:47:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:03.064 13:47:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:16:05.600 13:47:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:05.600 13:47:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:05.600 13:47:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:05.600 13:47:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:05.600 13:47:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:05.600 13:47:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:16:05.600 13:47:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:16:05.601 13:47:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:16:05.601 13:47:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:16:05.601 13:47:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:16:05.601 13:47:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:16:05.601 13:47:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:16:05.601 13:47:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:16:05.601 13:47:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:16:05.601 13:47:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:16:05.601 13:47:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:16:05.601 13:47:04 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:16:05.601 13:47:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:16:05.601 13:47:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:16:06.168 13:47:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:16:06.168 13:47:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:16:06.168 13:47:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:16:06.168 13:47:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:06.168 13:47:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:06.428 ************************************ 00:16:06.428 START TEST filesystem_in_capsule_ext4 00:16:06.428 ************************************ 00:16:06.428 13:47:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:16:06.428 13:47:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:16:06.428 13:47:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:16:06.428 13:47:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:16:06.428 13:47:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:16:06.428 13:47:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:16:06.428 13:47:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:16:06.428 13:47:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:16:06.428 13:47:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:16:06.428 13:47:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:16:06.428 13:47:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:16:06.428 mke2fs 1.47.0 (5-Feb-2023) 00:16:06.428 Discarding device blocks: 0/522240 done 00:16:06.428 Creating filesystem with 522240 1k blocks and 130560 inodes 00:16:06.428 Filesystem UUID: e5a46988-8ed7-4065-8f73-caef1e3dec57 00:16:06.428 
Superblock backups stored on blocks: 00:16:06.428 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:16:06.428 00:16:06.429 Allocating group tables: 0/64 done 00:16:06.429 Writing inode tables: 0/64 done 00:16:06.429 Creating journal (8192 blocks): done 00:16:06.429 Writing superblocks and filesystem accounting information: 0/64 done 00:16:06.429 00:16:06.429 13:47:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:16:06.429 13:47:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:16:06.429 13:47:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:16:06.429 13:47:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:16:06.429 13:47:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:16:06.429 13:47:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:16:06.429 13:47:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:16:06.429 13:47:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:16:06.429 13:47:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 1651111 00:16:06.429 13:47:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:16:06.429 13:47:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:16:06.429 13:47:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:16:06.429 13:47:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:16:06.429 00:16:06.429 real 0m0.180s 00:16:06.429 user 0m0.019s 00:16:06.429 sys 0m0.068s 00:16:06.429 13:47:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:06.429 13:47:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:16:06.429 ************************************ 00:16:06.429 END TEST filesystem_in_capsule_ext4 00:16:06.429 ************************************ 00:16:06.429 13:47:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:16:06.429 13:47:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:16:06.429 13:47:06 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:06.429 13:47:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:06.689 ************************************ 00:16:06.689 START TEST filesystem_in_capsule_btrfs 00:16:06.689 ************************************ 00:16:06.689 13:47:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:16:06.689 13:47:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:16:06.689 13:47:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:16:06.689 13:47:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:16:06.689 13:47:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:16:06.689 13:47:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:16:06.689 13:47:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:16:06.689 13:47:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:16:06.689 13:47:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:16:06.689 13:47:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:16:06.689 13:47:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:16:06.689 btrfs-progs v6.8.1 00:16:06.689 See https://btrfs.readthedocs.io for more information. 00:16:06.689 00:16:06.689 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:16:06.689 NOTE: several default settings have changed in version 5.15, please make sure 00:16:06.689 this does not affect your deployments: 00:16:06.689 - DUP for metadata (-m dup) 00:16:06.689 - enabled no-holes (-O no-holes) 00:16:06.689 - enabled free-space-tree (-R free-space-tree) 00:16:06.689 00:16:06.689 Label: (null) 00:16:06.689 UUID: 1c98161c-26e6-4c65-9aae-7709dade68ed 00:16:06.689 Node size: 16384 00:16:06.689 Sector size: 4096 (CPU page size: 4096) 00:16:06.689 Filesystem size: 510.00MiB 00:16:06.689 Block group profiles: 00:16:06.689 Data: single 8.00MiB 00:16:06.689 Metadata: DUP 32.00MiB 00:16:06.689 System: DUP 8.00MiB 00:16:06.689 SSD detected: yes 00:16:06.689 Zoned device: no 00:16:06.689 Features: extref, skinny-metadata, no-holes, free-space-tree 00:16:06.689 Checksum: crc32c 00:16:06.689 Number of devices: 1 00:16:06.689 Devices: 00:16:06.689 ID SIZE PATH 00:16:06.689 1 510.00MiB /dev/nvme0n1p1 00:16:06.689 00:16:06.689 13:47:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:16:06.689 13:47:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:16:06.689 13:47:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:16:06.689 13:47:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:16:06.689 13:47:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:16:06.689 13:47:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:16:06.689 13:47:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:16:06.689 13:47:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:16:06.689 13:47:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 1651111 00:16:06.689 13:47:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:16:06.689 13:47:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:16:06.689 13:47:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:16:06.689 13:47:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:16:06.689 00:16:06.689 real 0m0.229s 00:16:06.689 user 0m0.034s 00:16:06.689 sys 0m0.096s 00:16:06.689 13:47:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:06.689 13:47:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@10 -- # set +x 00:16:06.689 ************************************ 00:16:06.689 END TEST filesystem_in_capsule_btrfs 00:16:06.689 ************************************ 00:16:06.949 13:47:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:16:06.949 13:47:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:16:06.949 13:47:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:06.949 13:47:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:06.949 ************************************ 00:16:06.949 START TEST filesystem_in_capsule_xfs 00:16:06.949 ************************************ 00:16:06.949 13:47:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:16:06.949 13:47:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:16:06.949 13:47:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:16:06.949 13:47:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:16:06.949 13:47:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:16:06.949 13:47:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:16:06.949 13:47:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:16:06.949 13:47:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:16:06.949 13:47:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:16:06.949 13:47:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:16:06.949 13:47:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:16:06.949 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:16:06.949 = sectsz=512 attr=2, projid32bit=1 00:16:06.949 = crc=1 finobt=1, sparse=1, rmapbt=0 00:16:06.949 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:16:06.949 data = bsize=4096 blocks=130560, imaxpct=25 00:16:06.949 = sunit=0 swidth=0 blks 00:16:06.949 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:16:06.949 log =internal log bsize=4096 blocks=16384, version=2 00:16:06.949 = sectsz=512 sunit=0 blks, lazy-count=1 00:16:06.949 realtime =none extsz=4096 blocks=0, rtextents=0 00:16:06.949 Discarding blocks...Done. 
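All three filesystem subtests above run the same host-side recipe; only the mkfs invocation differs, and make_filesystem picks the force flag per filesystem (-F for ext4, -f for btrfs and xfs, as the '[' $fstype = ext4 ']' traces show). A condensed sketch of one pass, with the hostnqn/hostid arguments elided exactly as they appear in the log:

# Sketch of one filesystem pass; FSTYPE is ext4, btrfs, or xfs.
nvme connect -i 15 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420
until lsblk -l -o NAME,SERIAL | grep -qw SPDKISFASTANDAWESOME; do sleep 2; done
parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
partprobe && sleep 1
[ "$FSTYPE" = ext4 ] && force=-F || force=-f   # ext4 wants -F, btrfs/xfs want -f
mkfs.$FSTYPE $force /dev/nvme0n1p1
mount /dev/nvme0n1p1 /mnt/device
touch /mnt/device/aaa && sync                  # write something real through the capsule path
rm /mnt/device/aaa && sync
umount /mnt/device                             # the target must still answer kill -0 afterwards

The pass is then undone the way the later trace shows: flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1, a sync, and nvme disconnect -n nqn.2016-06.io.spdk:cnode1.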
00:16:06.949 13:47:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:16:06.949 13:47:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:16:06.949 13:47:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:16:06.949 13:47:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:16:06.949 13:47:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:16:06.949 13:47:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:16:06.949 13:47:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:16:06.949 13:47:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:16:06.949 13:47:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 1651111 00:16:06.949 13:47:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:16:06.949 13:47:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:16:06.949 13:47:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:16:06.949 13:47:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:16:06.949 00:16:06.949 real 0m0.208s 00:16:06.949 user 0m0.021s 00:16:06.949 sys 0m0.068s 00:16:06.949 13:47:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:06.949 13:47:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:16:06.949 ************************************ 00:16:06.949 END TEST filesystem_in_capsule_xfs 00:16:06.949 ************************************ 00:16:07.208 13:47:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:16:07.208 13:47:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:16:07.208 13:47:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:08.146 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:08.146 13:47:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:08.146 13:47:07 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:16:08.146 13:47:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:08.146 13:47:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:08.146 13:47:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:08.146 13:47:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:08.146 13:47:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:16:08.146 13:47:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:08.146 13:47:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.146 13:47:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:08.146 13:47:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.146 13:47:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:16:08.146 13:47:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 1651111 00:16:08.146 13:47:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 1651111 ']' 00:16:08.146 13:47:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 1651111 00:16:08.146 13:47:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:16:08.146 13:47:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:08.146 13:47:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1651111 00:16:08.146 13:47:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:08.146 13:47:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:08.146 13:47:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1651111' 00:16:08.146 killing process with pid 1651111 00:16:08.146 13:47:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 1651111 00:16:08.146 13:47:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 1651111 00:16:08.715 13:47:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:16:08.715 00:16:08.715 real 0m7.110s 
00:16:08.715 user 0m27.715s 00:16:08.715 sys 0m1.027s 00:16:08.715 13:47:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:08.715 13:47:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:08.715 ************************************ 00:16:08.715 END TEST nvmf_filesystem_in_capsule 00:16:08.715 ************************************ 00:16:08.715 13:47:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:16:08.715 13:47:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:08.715 13:47:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:16:08.716 13:47:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:16:08.716 13:47:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:16:08.716 13:47:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:16:08.716 13:47:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:08.716 13:47:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:16:08.716 rmmod nvme_rdma 00:16:08.716 rmmod nvme_fabrics 00:16:08.716 13:47:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:08.716 13:47:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:16:08.716 13:47:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:16:08.716 13:47:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:16:08.716 13:47:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:08.716 13:47:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:16:08.716 00:16:08.716 real 0m20.915s 00:16:08.716 user 0m57.418s 00:16:08.716 sys 0m6.857s 00:16:08.716 13:47:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:08.716 13:47:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:16:08.716 ************************************ 00:16:08.716 END TEST nvmf_filesystem 00:16:08.716 ************************************ 00:16:08.716 13:47:08 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=rdma 00:16:08.716 13:47:08 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:08.716 13:47:08 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:08.716 13:47:08 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:08.716 ************************************ 00:16:08.716 START TEST nvmf_target_discovery 00:16:08.716 ************************************ 00:16:08.716 13:47:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=rdma 00:16:08.716 * Looking for test storage... 
00:16:08.716 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:16:08.716 13:47:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:08.716 13:47:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:16:08.716 13:47:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:08.975 13:47:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:08.976 13:47:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:08.976 13:47:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:08.976 13:47:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:08.976 13:47:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:16:08.976 13:47:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:16:08.976 13:47:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:16:08.976 13:47:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:16:08.976 13:47:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:16:08.976 13:47:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:16:08.976 13:47:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:16:08.976 13:47:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:08.976 13:47:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:16:08.976 13:47:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:16:08.976 13:47:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:08.976 13:47:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:08.976 13:47:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:16:08.976 13:47:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:16:08.976 13:47:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:08.976 13:47:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:16:08.976 13:47:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:16:08.976 13:47:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:16:08.976 13:47:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:16:08.976 13:47:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:08.976 13:47:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:16:08.976 13:47:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:16:08.976 13:47:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:08.976 13:47:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:08.976 13:47:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:16:08.976 13:47:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:08.976 13:47:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:08.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:08.976 --rc genhtml_branch_coverage=1 00:16:08.976 --rc genhtml_function_coverage=1 00:16:08.976 --rc genhtml_legend=1 00:16:08.976 --rc geninfo_all_blocks=1 00:16:08.976 --rc geninfo_unexecuted_blocks=1 00:16:08.976 00:16:08.976 ' 00:16:08.976 13:47:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:08.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:08.976 --rc genhtml_branch_coverage=1 00:16:08.976 --rc genhtml_function_coverage=1 00:16:08.976 --rc genhtml_legend=1 00:16:08.976 --rc geninfo_all_blocks=1 00:16:08.976 --rc geninfo_unexecuted_blocks=1 00:16:08.976 00:16:08.976 ' 00:16:08.976 13:47:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:08.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:08.976 --rc genhtml_branch_coverage=1 00:16:08.976 --rc genhtml_function_coverage=1 00:16:08.976 --rc genhtml_legend=1 00:16:08.976 --rc geninfo_all_blocks=1 00:16:08.976 --rc geninfo_unexecuted_blocks=1 00:16:08.976 00:16:08.976 ' 00:16:08.976 13:47:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:08.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:08.976 --rc genhtml_branch_coverage=1 00:16:08.976 --rc genhtml_function_coverage=1 00:16:08.976 --rc genhtml_legend=1 00:16:08.976 --rc geninfo_all_blocks=1 00:16:08.976 --rc geninfo_unexecuted_blocks=1 00:16:08.976 00:16:08.976 ' 00:16:08.976 13:47:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:16:08.976 13:47:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:16:08.976 13:47:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:08.976 13:47:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:08.976 13:47:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:08.976 13:47:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:08.976 13:47:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:08.976 13:47:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:08.976 13:47:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:08.976 13:47:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:08.976 13:47:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:08.976 13:47:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:08.976 13:47:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:16:08.976 13:47:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:16:08.976 13:47:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:08.976 13:47:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:08.976 13:47:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:08.976 13:47:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:08.976 13:47:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:16:08.976 13:47:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:16:08.976 13:47:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:08.976 13:47:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:08.976 13:47:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:08.976 13:47:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:08.976 13:47:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:08.976 13:47:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:08.976 13:47:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:16:08.976 13:47:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:08.976 13:47:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:16:08.976 13:47:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:08.976 13:47:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:08.976 13:47:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:08.976 13:47:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:08.976 13:47:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:08.976 13:47:08 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:08.976 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:08.976 13:47:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:08.976 13:47:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:08.976 13:47:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:08.976 13:47:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:16:08.976 13:47:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:16:08.977 13:47:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:16:08.977 13:47:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:16:08.977 13:47:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:16:08.977 13:47:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:16:08.977 13:47:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:08.977 13:47:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:08.977 13:47:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:08.977 13:47:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:08.977 13:47:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:08.977 13:47:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:08.977 13:47:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:08.977 13:47:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:08.977 13:47:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:08.977 13:47:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:16:08.977 13:47:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:15.550 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:15.550 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:16:15.550 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:15.550 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:15.550 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:15.550 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:15.550 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:15.550 13:47:14 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:16:15.550 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:15.550 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:16:15.550 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:16:15.550 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:16:15.550 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:16:15.550 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:16:15.550 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:16:15.550 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:15.550 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:15.550 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:15.550 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:15.550 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:15.550 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:15.550 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:15.550 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:15.550 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:15.550 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:15.550 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:15.550 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:15.550 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:15.550 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:16:15.550 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:16:15.550 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:16:15.550 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:16:15.550 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:16:15.550 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:15.550 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
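gather_supported_nvmf_pci_devs, traced above, first fills arrays of supported Intel (0x8086) and Mellanox (0x15b3) device IDs and then walks the PCI bus; on this box it matches 0x15b3:0x1015 twice, as the next lines show. A rough sysfs-only sketch of the same walk (illustrative, not the helper's exact logic):

# Sketch: list the net interfaces behind Mellanox (0x15b3) PCI functions.
for pci in /sys/bus/pci/devices/*; do
    [ "$(cat "$pci/vendor")" = 0x15b3 ] || continue
    echo "Found ${pci##*/} ($(cat "$pci/device")): $(ls "$pci/net" 2>/dev/null)"
done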
00:16:15.550 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:16:15.550 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:16:15.550 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:16:15.551 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:16:15.551 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:15.551 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:15.551 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:16:15.551 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:16:15.551 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:15.551 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:16:15.551 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:16:15.551 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:16:15.551 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:16:15.551 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:15.551 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:15.551 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:16:15.551 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:16:15.551 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:15.551 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:16:15.551 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:15.551 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:15.551 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:16:15.551 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:15.551 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:15.551 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:16:15.551 Found net devices under 0000:18:00.0: mlx_0_0 00:16:15.551 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:15.551 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:15.551 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:15.551 13:47:14 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:16:15.551 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:15.551 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:15.551 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:16:15.551 Found net devices under 0000:18:00.1: mlx_0_1 00:16:15.551 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:15.551 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:15.551 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:16:15.551 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:15.551 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:16:15.551 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:16:15.551 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@448 -- # rdma_device_init 00:16:15.551 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:16:15.551 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@62 -- # uname 00:16:15.551 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:16:15.551 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@66 -- # modprobe ib_cm 00:16:15.551 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@67 -- # modprobe ib_core 00:16:15.551 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@68 -- # modprobe ib_umad 00:16:15.551 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:16:15.551 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@70 -- # modprobe iw_cm 00:16:15.551 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:16:15.551 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:16:15.551 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@530 -- # allocate_nic_ips 00:16:15.551 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:16:15.551 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@77 -- # get_rdma_if_list 00:16:15.551 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:15.551 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:16:15.551 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:16:15.551 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:15.551 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@100 -- # (( 2 == 0 )) 
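rdma_device_init above reduces to a fixed modprobe sequence; the seven calls below are copied verbatim from the trace (nvmf/common.sh@66-72), so the block should reproduce the same kernel state on a comparable RoCE host:

    modprobe ib_cm
    modprobe ib_core
    modprobe ib_umad
    modprobe ib_uverbs
    modprobe iw_cm
    modprobe rdma_cm
    modprobe rdma_ucm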
00:16:15.551 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:16:15.551 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:15.551 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:15.551 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@108 -- # echo mlx_0_0 00:16:15.551 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@109 -- # continue 2 00:16:15.551 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:16:15.551 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:15.551 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:15.551 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:15.551 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:15.551 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@108 -- # echo mlx_0_1 00:16:15.551 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@109 -- # continue 2 00:16:15.551 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:16:15.551 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:16:15.551 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:16:15.551 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:16:15.551 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # awk '{print $4}' 00:16:15.551 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # cut -d/ -f1 00:16:15.551 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:16:15.551 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:16:15.551 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:16:15.551 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:15.551 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:16:15.551 altname enp24s0f0np0 00:16:15.551 altname ens785f0np0 00:16:15.551 inet 192.168.100.8/24 scope global mlx_0_0 00:16:15.551 valid_lft forever preferred_lft forever 00:16:15.551 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:16:15.551 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:16:15.551 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:16:15.551 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:16:15.551 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # awk '{print $4}' 00:16:15.551 13:47:14 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # cut -d/ -f1 00:16:15.551 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:16:15.551 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:16:15.551 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:16:15.551 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:15.551 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:16:15.551 altname enp24s0f1np1 00:16:15.551 altname ens785f1np1 00:16:15.551 inet 192.168.100.9/24 scope global mlx_0_1 00:16:15.551 valid_lft forever preferred_lft forever 00:16:15.551 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:16:15.551 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:15.551 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:16:15.551 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:16:15.551 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:16:15.551 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@90 -- # get_rdma_if_list 00:16:15.551 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:15.551 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:16:15.551 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:16:15.551 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:15.551 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:16:15.551 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:16:15.551 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:15.551 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:15.551 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@108 -- # echo mlx_0_0 00:16:15.552 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@109 -- # continue 2 00:16:15.552 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:16:15.552 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:15.552 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:15.552 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:15.552 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:15.552 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@108 -- # echo mlx_0_1 
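The per-interface address extraction traced at nvmf/common.sh@116-117 is a three-stage pipeline over "ip -o -4". A reconstruction as a standalone helper; the pipeline stages are verbatim from the trace, while the function wrapper itself is inferred rather than copied from common.sh:

    get_ip_address() {
        local interface=$1
        # column 4 of "ip -o -4 addr show" is "ADDR/PREFIX"; drop the prefix length
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0   # -> 192.168.100.8 on this host
    get_ip_address mlx_0_1   # -> 192.168.100.9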
00:16:15.552 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@109 -- # continue 2 00:16:15.552 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:16:15.552 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:16:15.552 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:16:15.552 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:16:15.552 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # awk '{print $4}' 00:16:15.552 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # cut -d/ -f1 00:16:15.552 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:16:15.552 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:16:15.552 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:16:15.552 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:16:15.552 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # awk '{print $4}' 00:16:15.552 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # cut -d/ -f1 00:16:15.552 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:16:15.552 192.168.100.9' 00:16:15.552 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:16:15.552 192.168.100.9' 00:16:15.552 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@485 -- # head -n 1 00:16:15.552 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:16:15.552 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:16:15.552 192.168.100.9' 00:16:15.552 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@486 -- # tail -n +2 00:16:15.552 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@486 -- # head -n 1 00:16:15.552 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:16:15.552 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:16:15.552 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:16:15.552 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:16:15.552 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:16:15.552 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:16:15.552 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:16:15.552 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:15.552 13:47:14 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:15.552 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:15.552 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=1655943 00:16:15.552 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:15.552 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 1655943 00:16:15.552 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 1655943 ']' 00:16:15.552 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:15.552 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:15.552 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:15.552 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:15.552 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:15.552 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:15.552 [2024-12-05 13:47:14.743816] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 00:16:15.552 [2024-12-05 13:47:14.743860] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:15.552 [2024-12-05 13:47:14.816345] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:15.552 [2024-12-05 13:47:14.838067] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:15.552 [2024-12-05 13:47:14.838104] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:15.552 [2024-12-05 13:47:14.838111] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:15.552 [2024-12-05 13:47:14.838116] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:15.552 [2024-12-05 13:47:14.838121] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
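Two steps from the surrounding trace are worth condensing: the interface addresses are folded into the test's target-IP variables (nvmf/common.sh@484-486), and nvmfappstart launches nvmf_tgt with the traced flags and blocks until its RPC socket is up. The head/tail pipelines are verbatim; backgrounding the binary and the $SPDK_BIN_DIR stand-in for the absolute build path shown in the trace are assumptions:

    RDMA_IP_LIST="$(get_available_rdma_ips)"                      # "192.168.100.8\n192.168.100.9"
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)

    "$SPDK_BIN_DIR/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &              # flags as traced
    nvmfpid=$!
    waitforlisten "$nvmfpid"                                      # waits on /var/tmp/spdk.sock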
00:16:15.552 [2024-12-05 13:47:14.843397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:15.552 [2024-12-05 13:47:14.843422] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:15.552 [2024-12-05 13:47:14.843551] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:15.552 [2024-12-05 13:47:14.843552] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:15.552 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:15.552 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:16:15.552 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:15.552 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:15.552 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:15.552 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:15.552 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:16:15.552 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.552 13:47:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:15.552 [2024-12-05 13:47:15.001497] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xba4f30/0xba9420) succeed. 00:16:15.552 [2024-12-05 13:47:15.009740] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xba65c0/0xbeaac0) succeed. 
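With the RDMA transport created and both IB devices up, discovery.sh provisions four identical null-backed subsystems plus a discovery referral, and later deletes them all; the loops below condense the per-iteration RPCs traced in the rest of this test (discovery.sh@26-35 for setup, @42-47 for teardown). The individual rpc_cmd invocations are verbatim from the trace; the serial-number pattern is inferred from the traced values SPDK00000000000001..4:

    for i in $(seq 1 4); do
        rpc_cmd bdev_null_create Null$i 102400 512     # size/block-size values verbatim from the trace
        rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK0000000000000$i
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Null$i
        rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t rdma -a 192.168.100.8 -s 4420
    done
    rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
    rpc_cmd nvmf_discovery_add_referral -t rdma -a 192.168.100.8 -s 4430   # becomes Discovery Log Entry 5 below

    for i in $(seq 1 4); do                            # mirrored teardown, traced further down
        rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode$i
        rpc_cmd bdev_null_delete Null$i
    done
    rpc_cmd nvmf_discovery_remove_referral -t rdma -a 192.168.100.8 -s 4430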
00:16:15.552 13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.552 13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:16:15.552 13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:16:15.552 13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:16:15.552 13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.552 13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:15.552 Null1 00:16:15.552 13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.552 13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:15.552 13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.552 13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:15.552 13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.552 13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:16:15.552 13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.552 13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:15.552 13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.552 13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:16:15.552 13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.552 13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:15.552 [2024-12-05 13:47:15.169287] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:16:15.552 13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.552 13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:16:15.552 13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:16:15.552 13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.552 13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:15.552 Null2 00:16:15.552 13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.552 13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:16:15.552 13:47:15 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.552 13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:15.552 13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.552 13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:16:15.552 13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.553 13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:15.553 13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.553 13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:16:15.553 13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.553 13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:15.553 13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.553 13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:16:15.553 13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:16:15.553 13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.553 13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:15.553 Null3 00:16:15.553 13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.553 13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:16:15.553 13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.553 13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:15.553 13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.553 13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:16:15.553 13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.553 13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:15.553 13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.553 13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:16:15.553 13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.553 13:47:15 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:15.553 13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.553 13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:16:15.553 13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:16:15.553 13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.553 13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:15.553 Null4 00:16:15.553 13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.553 13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:16:15.553 13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.553 13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:15.553 13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.553 13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:16:15.553 13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.553 13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:15.553 13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.553 13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 192.168.100.8 -s 4420 00:16:15.553 13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.553 13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:15.553 13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.553 13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:16:15.553 13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.553 13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:15.553 13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.553 13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 192.168.100.8 -s 4430 00:16:15.553 13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.553 13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:15.553 13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:15.553 13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 4420
00:16:15.553
00:16:15.553 Discovery Log Number of Records 6, Generation counter 6
00:16:15.553 =====Discovery Log Entry 0======
00:16:15.553 trtype: rdma
00:16:15.553 adrfam: ipv4
00:16:15.553 subtype: current discovery subsystem
00:16:15.553 treq: not required
00:16:15.553 portid: 0
00:16:15.553 trsvcid: 4420
00:16:15.553 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:16:15.553 traddr: 192.168.100.8
00:16:15.553 eflags: explicit discovery connections, duplicate discovery information
00:16:15.553 rdma_prtype: not specified
00:16:15.553 rdma_qptype: connected
00:16:15.553 rdma_cms: rdma-cm
00:16:15.553 rdma_pkey: 0x0000
00:16:15.553 =====Discovery Log Entry 1======
00:16:15.553 trtype: rdma
00:16:15.553 adrfam: ipv4
00:16:15.553 subtype: nvme subsystem
00:16:15.553 treq: not required
00:16:15.553 portid: 0
00:16:15.553 trsvcid: 4420
00:16:15.553 subnqn: nqn.2016-06.io.spdk:cnode1
00:16:15.553 traddr: 192.168.100.8
00:16:15.553 eflags: none
00:16:15.553 rdma_prtype: not specified
00:16:15.553 rdma_qptype: connected
00:16:15.553 rdma_cms: rdma-cm
00:16:15.553 rdma_pkey: 0x0000
00:16:15.553 =====Discovery Log Entry 2======
00:16:15.553 trtype: rdma
00:16:15.553 adrfam: ipv4
00:16:15.553 subtype: nvme subsystem
00:16:15.553 treq: not required
00:16:15.553 portid: 0
00:16:15.553 trsvcid: 4420
00:16:15.553 subnqn: nqn.2016-06.io.spdk:cnode2
00:16:15.553 traddr: 192.168.100.8
00:16:15.553 eflags: none
00:16:15.553 rdma_prtype: not specified
00:16:15.553 rdma_qptype: connected
00:16:15.553 rdma_cms: rdma-cm
00:16:15.553 rdma_pkey: 0x0000
00:16:15.553 =====Discovery Log Entry 3======
00:16:15.553 trtype: rdma
00:16:15.553 adrfam: ipv4
00:16:15.553 subtype: nvme subsystem
00:16:15.553 treq: not required
00:16:15.553 portid: 0
00:16:15.553 trsvcid: 4420
00:16:15.553 subnqn: nqn.2016-06.io.spdk:cnode3
00:16:15.553 traddr: 192.168.100.8
00:16:15.553 eflags: none
00:16:15.553 rdma_prtype: not specified
00:16:15.553 rdma_qptype: connected
00:16:15.553 rdma_cms: rdma-cm
00:16:15.553 rdma_pkey: 0x0000
00:16:15.553 =====Discovery Log Entry 4======
00:16:15.553 trtype: rdma
00:16:15.553 adrfam: ipv4
00:16:15.553 subtype: nvme subsystem
00:16:15.553 treq: not required
00:16:15.553 portid: 0
00:16:15.553 trsvcid: 4420
00:16:15.553 subnqn: nqn.2016-06.io.spdk:cnode4
00:16:15.553 traddr: 192.168.100.8
00:16:15.553 eflags: none
00:16:15.553 rdma_prtype: not specified
00:16:15.553 rdma_qptype: connected
00:16:15.553 rdma_cms: rdma-cm
00:16:15.553 rdma_pkey: 0x0000
00:16:15.553 =====Discovery Log Entry 5======
00:16:15.553 trtype: rdma
00:16:15.553 adrfam: ipv4
00:16:15.553 subtype: discovery subsystem referral
00:16:15.553 treq: not required
00:16:15.553 portid: 0
00:16:15.553 trsvcid: 4430
00:16:15.553 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:16:15.553 traddr: 192.168.100.8
00:16:15.553 eflags: none
00:16:15.553 rdma_prtype: unrecognized
00:16:15.553 rdma_qptype: unrecognized
00:16:15.553 rdma_cms: unrecognized
00:16:15.553 rdma_pkey: 0x0000
00:16:15.553 13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC'
00:16:15.553 Perform nvmf subsystem discovery via RPC
00:16:15.553 13:47:15
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:16:15.553 13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.553 13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:15.553 [ 00:16:15.553 { 00:16:15.553 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:15.553 "subtype": "Discovery", 00:16:15.553 "listen_addresses": [ 00:16:15.553 { 00:16:15.553 "trtype": "RDMA", 00:16:15.553 "adrfam": "IPv4", 00:16:15.553 "traddr": "192.168.100.8", 00:16:15.553 "trsvcid": "4420" 00:16:15.553 } 00:16:15.553 ], 00:16:15.553 "allow_any_host": true, 00:16:15.553 "hosts": [] 00:16:15.553 }, 00:16:15.553 { 00:16:15.553 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:15.553 "subtype": "NVMe", 00:16:15.553 "listen_addresses": [ 00:16:15.553 { 00:16:15.553 "trtype": "RDMA", 00:16:15.553 "adrfam": "IPv4", 00:16:15.553 "traddr": "192.168.100.8", 00:16:15.553 "trsvcid": "4420" 00:16:15.553 } 00:16:15.553 ], 00:16:15.553 "allow_any_host": true, 00:16:15.553 "hosts": [], 00:16:15.553 "serial_number": "SPDK00000000000001", 00:16:15.553 "model_number": "SPDK bdev Controller", 00:16:15.553 "max_namespaces": 32, 00:16:15.553 "min_cntlid": 1, 00:16:15.553 "max_cntlid": 65519, 00:16:15.553 "namespaces": [ 00:16:15.553 { 00:16:15.553 "nsid": 1, 00:16:15.553 "bdev_name": "Null1", 00:16:15.553 "name": "Null1", 00:16:15.553 "nguid": "16A9D20B6B6347508B1D4724D6BEA90A", 00:16:15.554 "uuid": "16a9d20b-6b63-4750-8b1d-4724d6bea90a" 00:16:15.554 } 00:16:15.554 ] 00:16:15.554 }, 00:16:15.554 { 00:16:15.554 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:16:15.554 "subtype": "NVMe", 00:16:15.554 "listen_addresses": [ 00:16:15.812 { 00:16:15.812 "trtype": "RDMA", 00:16:15.812 "adrfam": "IPv4", 00:16:15.812 "traddr": "192.168.100.8", 00:16:15.812 "trsvcid": "4420" 00:16:15.812 } 00:16:15.812 ], 00:16:15.812 "allow_any_host": true, 00:16:15.812 "hosts": [], 00:16:15.812 "serial_number": "SPDK00000000000002", 00:16:15.812 "model_number": "SPDK bdev Controller", 00:16:15.812 "max_namespaces": 32, 00:16:15.812 "min_cntlid": 1, 00:16:15.812 "max_cntlid": 65519, 00:16:15.812 "namespaces": [ 00:16:15.812 { 00:16:15.812 "nsid": 1, 00:16:15.812 "bdev_name": "Null2", 00:16:15.812 "name": "Null2", 00:16:15.812 "nguid": "A3D707C9D2FC4BB0AE927F2175BA8CAF", 00:16:15.812 "uuid": "a3d707c9-d2fc-4bb0-ae92-7f2175ba8caf" 00:16:15.812 } 00:16:15.812 ] 00:16:15.812 }, 00:16:15.812 { 00:16:15.812 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:16:15.812 "subtype": "NVMe", 00:16:15.812 "listen_addresses": [ 00:16:15.812 { 00:16:15.812 "trtype": "RDMA", 00:16:15.812 "adrfam": "IPv4", 00:16:15.812 "traddr": "192.168.100.8", 00:16:15.812 "trsvcid": "4420" 00:16:15.812 } 00:16:15.812 ], 00:16:15.812 "allow_any_host": true, 00:16:15.812 "hosts": [], 00:16:15.812 "serial_number": "SPDK00000000000003", 00:16:15.812 "model_number": "SPDK bdev Controller", 00:16:15.812 "max_namespaces": 32, 00:16:15.812 "min_cntlid": 1, 00:16:15.812 "max_cntlid": 65519, 00:16:15.812 "namespaces": [ 00:16:15.812 { 00:16:15.812 "nsid": 1, 00:16:15.812 "bdev_name": "Null3", 00:16:15.812 "name": "Null3", 00:16:15.812 "nguid": "4593A6F8057D4FBF8AEB5BD8CEF2F8DA", 00:16:15.812 "uuid": "4593a6f8-057d-4fbf-8aeb-5bd8cef2f8da" 00:16:15.812 } 00:16:15.812 ] 00:16:15.812 }, 00:16:15.812 { 00:16:15.812 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:16:15.812 "subtype": "NVMe", 00:16:15.812 "listen_addresses": [ 00:16:15.812 { 00:16:15.812 
"trtype": "RDMA", 00:16:15.812 "adrfam": "IPv4", 00:16:15.812 "traddr": "192.168.100.8", 00:16:15.812 "trsvcid": "4420" 00:16:15.812 } 00:16:15.812 ], 00:16:15.812 "allow_any_host": true, 00:16:15.812 "hosts": [], 00:16:15.812 "serial_number": "SPDK00000000000004", 00:16:15.812 "model_number": "SPDK bdev Controller", 00:16:15.812 "max_namespaces": 32, 00:16:15.812 "min_cntlid": 1, 00:16:15.812 "max_cntlid": 65519, 00:16:15.812 "namespaces": [ 00:16:15.812 { 00:16:15.812 "nsid": 1, 00:16:15.812 "bdev_name": "Null4", 00:16:15.812 "name": "Null4", 00:16:15.812 "nguid": "17C1DCED20E941BEB15C8EC3EFE52CBF", 00:16:15.812 "uuid": "17c1dced-20e9-41be-b15c-8ec3efe52cbf" 00:16:15.812 } 00:16:15.812 ] 00:16:15.812 } 00:16:15.812 ] 00:16:15.812 13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.812 13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:16:15.812 13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:16:15.812 13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:15.812 13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.812 13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:15.812 13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.812 13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:16:15.812 13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.812 13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:15.812 13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.812 13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:16:15.812 13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:16:15.812 13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.812 13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:15.812 13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.812 13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:16:15.812 13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.812 13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:15.812 13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.812 13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:16:15.812 13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:16:15.812 
13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.812 13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:15.812 13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.812 13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:16:15.812 13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.812 13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:15.812 13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.812 13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:16:15.812 13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:16:15.812 13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.812 13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:15.812 13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.812 13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:16:15.812 13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.812 13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:15.813 13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.813 13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 192.168.100.8 -s 4430 00:16:15.813 13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.813 13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:15.813 13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.813 13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:16:15.813 13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:16:15.813 13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.813 13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:15.813 13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.813 13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:16:15.813 13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:16:15.813 13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:16:15.813 13:47:15 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:16:15.813 13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:15.813 13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:16:15.813 13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:16:15.813 13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:16:15.813 13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:16:15.813 13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:15.813 13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:16:15.813 rmmod nvme_rdma 00:16:15.813 rmmod nvme_fabrics 00:16:15.813 13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:15.813 13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:16:15.813 13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:16:15.813 13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 1655943 ']' 00:16:15.813 13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 1655943 00:16:15.813 13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 1655943 ']' 00:16:15.813 13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 1655943 00:16:15.813 13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 00:16:15.813 13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:15.813 13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1655943 00:16:15.813 13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:15.813 13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:15.813 13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1655943' 00:16:15.813 killing process with pid 1655943 00:16:15.813 13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 1655943 00:16:15.813 13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 1655943 00:16:16.072 13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:16.072 13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:16:16.072 00:16:16.072 real 0m7.431s 00:16:16.072 user 0m5.886s 00:16:16.072 sys 0m4.970s 00:16:16.072 13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:16.072 13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:16.072 ************************************ 00:16:16.072 END TEST nvmf_target_discovery 
00:16:16.072 ************************************ 00:16:16.072 13:47:15 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=rdma 00:16:16.072 13:47:15 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:16.072 13:47:15 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:16.072 13:47:15 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:16.332 ************************************ 00:16:16.332 START TEST nvmf_referrals 00:16:16.332 ************************************ 00:16:16.332 13:47:15 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=rdma 00:16:16.332 * Looking for test storage... 00:16:16.332 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:16:16.332 13:47:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:16.332 13:47:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lcov --version 00:16:16.332 13:47:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:16.332 13:47:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:16.332 13:47:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:16.332 13:47:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:16.332 13:47:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:16.332 13:47:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:16:16.332 13:47:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:16:16.332 13:47:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:16:16.332 13:47:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:16:16.332 13:47:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:16:16.332 13:47:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:16:16.332 13:47:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:16:16.332 13:47:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:16.332 13:47:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:16:16.332 13:47:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:16:16.332 13:47:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:16.332 13:47:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:16.332 13:47:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:16:16.332 13:47:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:16:16.332 13:47:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:16.332 13:47:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:16:16.332 13:47:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:16:16.332 13:47:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:16:16.332 13:47:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:16:16.332 13:47:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:16.332 13:47:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:16:16.332 13:47:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:16:16.332 13:47:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:16.332 13:47:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:16.332 13:47:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:16:16.332 13:47:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:16.332 13:47:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:16.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:16.332 --rc genhtml_branch_coverage=1 00:16:16.332 --rc genhtml_function_coverage=1 00:16:16.332 --rc genhtml_legend=1 00:16:16.332 --rc geninfo_all_blocks=1 00:16:16.332 --rc geninfo_unexecuted_blocks=1 00:16:16.332 00:16:16.332 ' 00:16:16.332 13:47:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:16.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:16.332 --rc genhtml_branch_coverage=1 00:16:16.332 --rc genhtml_function_coverage=1 00:16:16.332 --rc genhtml_legend=1 00:16:16.332 --rc geninfo_all_blocks=1 00:16:16.332 --rc geninfo_unexecuted_blocks=1 00:16:16.332 00:16:16.332 ' 00:16:16.332 13:47:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:16.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:16.332 --rc genhtml_branch_coverage=1 00:16:16.332 --rc genhtml_function_coverage=1 00:16:16.332 --rc genhtml_legend=1 00:16:16.332 --rc geninfo_all_blocks=1 00:16:16.332 --rc geninfo_unexecuted_blocks=1 00:16:16.332 00:16:16.332 ' 00:16:16.332 13:47:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:16.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:16.332 --rc genhtml_branch_coverage=1 00:16:16.332 --rc genhtml_function_coverage=1 00:16:16.332 --rc genhtml_legend=1 00:16:16.332 --rc geninfo_all_blocks=1 00:16:16.332 --rc geninfo_unexecuted_blocks=1 00:16:16.332 00:16:16.332 ' 00:16:16.332 13:47:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:16:16.332 13:47:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@7 -- # uname -s 00:16:16.332 13:47:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:16.332 13:47:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:16.332 13:47:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:16.332 13:47:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:16.332 13:47:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:16.332 13:47:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:16.332 13:47:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:16.332 13:47:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:16.332 13:47:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:16.332 13:47:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:16.332 13:47:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:16:16.332 13:47:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:16:16.332 13:47:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:16.332 13:47:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:16.332 13:47:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:16.332 13:47:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:16.332 13:47:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:16:16.332 13:47:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:16:16.332 13:47:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:16.332 13:47:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:16.332 13:47:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:16.333 13:47:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:16.333 13:47:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:16.333 13:47:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:16.333 13:47:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:16:16.333 13:47:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:16.333 13:47:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:16:16.333 13:47:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:16.333 13:47:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:16.333 13:47:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:16.333 13:47:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:16.333 13:47:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:16.333 13:47:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:16.333 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:16.333 13:47:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:16.333 13:47:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:16.333 13:47:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:16.333 13:47:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:16:16.333 13:47:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # 
NVMF_REFERRAL_IP_2=127.0.0.3 00:16:16.333 13:47:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:16:16.333 13:47:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:16:16.333 13:47:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:16:16.333 13:47:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:16:16.333 13:47:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:16:16.333 13:47:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:16:16.333 13:47:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:16.333 13:47:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:16.333 13:47:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:16.333 13:47:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:16.333 13:47:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:16.333 13:47:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:16.333 13:47:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:16.333 13:47:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:16.333 13:47:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:16.333 13:47:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:16:16.333 13:47:16 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:22.907 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:22.907 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:16:22.907 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:22.907 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:22.907 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:22.907 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:22.907 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:22.907 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:16:22.907 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:22.907 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=() 00:16:22.907 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:16:22.907 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:16:22.907 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:16:22.907 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@322 -- # mlx=() 00:16:22.907 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:16:22.907 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:22.907 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:22.907 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:22.907 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:22.907 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:22.907 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:22.907 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:22.907 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:22.907 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:22.907 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:22.907 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:22.907 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:22.907 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:22.907 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:16:22.907 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:16:22.907 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:16:22.907 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:16:22.907 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:16:22.907 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:22.907 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:22.907 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:16:22.907 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:16:22.907 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:16:22.908 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:16:22.908 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:22.908 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:22.908 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:16:22.908 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # 
NVME_CONNECT='nvme connect -i 15' 00:16:22.908 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:22.908 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:16:22.908 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:16:22.908 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:16:22.908 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:16:22.908 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:22.908 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:22.908 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:16:22.908 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:16:22.908 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:22.908 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:16:22.908 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:22.908 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:22.908 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:16:22.908 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:22.908 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:22.908 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:16:22.908 Found net devices under 0000:18:00.0: mlx_0_0 00:16:22.908 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:22.908 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:22.908 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:22.908 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:16:22.908 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:22.908 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:22.908 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:16:22.908 Found net devices under 0000:18:00.1: mlx_0_1 00:16:22.908 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:22.908 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:22.908 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:16:22.908 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:22.908 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # 
[[ rdma == tcp ]] 00:16:22.908 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:16:22.908 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@448 -- # rdma_device_init 00:16:22.908 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:16:22.908 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@62 -- # uname 00:16:22.908 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:16:22.908 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@66 -- # modprobe ib_cm 00:16:22.908 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@67 -- # modprobe ib_core 00:16:22.908 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@68 -- # modprobe ib_umad 00:16:22.908 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:16:22.908 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@70 -- # modprobe iw_cm 00:16:22.908 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:16:22.908 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:16:22.908 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@530 -- # allocate_nic_ips 00:16:22.908 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:16:22.908 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@77 -- # get_rdma_if_list 00:16:22.908 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:22.908 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:16:22.908 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:16:22.908 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:22.908 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:16:22.908 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:16:22.908 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:22.908 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:22.908 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@108 -- # echo mlx_0_0 00:16:22.908 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@109 -- # continue 2 00:16:22.908 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:16:22.908 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:22.908 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:22.908 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:22.908 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:22.908 13:47:22 
nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@108 -- # echo mlx_0_1 00:16:22.908 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@109 -- # continue 2 00:16:22.908 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:16:22.908 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:16:22.908 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:16:22.908 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:16:22.908 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # awk '{print $4}' 00:16:22.908 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # cut -d/ -f1 00:16:22.908 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:16:22.908 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:16:22.908 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:16:22.908 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:22.908 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:16:22.908 altname enp24s0f0np0 00:16:22.908 altname ens785f0np0 00:16:22.908 inet 192.168.100.8/24 scope global mlx_0_0 00:16:22.908 valid_lft forever preferred_lft forever 00:16:22.908 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:16:22.908 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:16:22.908 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:16:22.908 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:16:22.908 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # awk '{print $4}' 00:16:22.908 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # cut -d/ -f1 00:16:22.908 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:16:22.908 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:16:22.908 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:16:22.908 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:22.908 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:16:22.908 altname enp24s0f1np1 00:16:22.908 altname ens785f1np1 00:16:22.908 inet 192.168.100.9/24 scope global mlx_0_1 00:16:22.908 valid_lft forever preferred_lft forever 00:16:22.908 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:16:22.908 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:22.908 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:16:22.908 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:16:22.908 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:16:22.908 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@90 -- # get_rdma_if_list 00:16:22.908 13:47:22 
nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:22.908 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:16:22.908 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:16:22.908 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:22.908 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:16:22.908 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:16:22.908 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:22.908 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:22.908 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@108 -- # echo mlx_0_0 00:16:22.908 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@109 -- # continue 2 00:16:22.908 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:16:22.908 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:22.909 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:22.909 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:22.909 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:22.909 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@108 -- # echo mlx_0_1 00:16:22.909 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@109 -- # continue 2 00:16:22.909 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:16:22.909 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:16:22.909 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:16:22.909 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:16:22.909 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # awk '{print $4}' 00:16:22.909 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # cut -d/ -f1 00:16:22.909 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:16:22.909 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:16:22.909 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:16:22.909 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:16:22.909 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # awk '{print $4}' 00:16:22.909 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # cut -d/ -f1 00:16:22.909 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:16:22.909 192.168.100.9' 
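The trace above resolves each RDMA interface's IPv4 address by piping "ip -o -4 addr show" through awk and cut. A minimal standalone sketch of that extraction, assuming a host where an interface named mlx_0_0 exists (as on this test bed):

    #!/usr/bin/env bash
    # Mirror of the get_ip_address helper traced above: print the first
    # IPv4 address assigned to an interface, without the /prefix suffix.
    get_ip_address() {
        local interface=$1
        # -o prints one record per line; field 4 holds "ADDR/PREFIX".
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0   # prints 192.168.100.8 on this rig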
00:16:22.909 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:16:22.909 192.168.100.9' 00:16:22.909 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@485 -- # head -n 1 00:16:22.909 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:16:22.909 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:16:22.909 192.168.100.9' 00:16:22.909 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@486 -- # tail -n +2 00:16:22.909 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@486 -- # head -n 1 00:16:22.909 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:16:22.909 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:16:22.909 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:16:22.909 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:16:22.909 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:16:22.909 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:16:22.909 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:16:22.909 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:22.909 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:22.909 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:22.909 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=1659550 00:16:22.909 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 1659550 00:16:22.909 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:22.909 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 1659550 ']' 00:16:22.909 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:22.909 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:22.909 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:22.909 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:22.909 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:22.909 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:22.909 [2024-12-05 13:47:22.285487] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 
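common.sh then splits the newline-separated RDMA_IP_LIST into the first and second target IPs with a head/tail pipeline before the target app comes up. The same selection in isolation, using the two addresses captured above:

    #!/usr/bin/env bash
    # Pick the first and second entries of a newline-separated address
    # list, as the head/tail pipeline in the trace does.
    RDMA_IP_LIST='192.168.100.8
    192.168.100.9'
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)
    echo "$NVMF_FIRST_TARGET_IP $NVMF_SECOND_TARGET_IP"   # 192.168.100.8 192.168.100.9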
00:16:22.909 [2024-12-05 13:47:22.285531] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:22.909 [2024-12-05 13:47:22.360505] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:22.909 [2024-12-05 13:47:22.382380] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:22.909 [2024-12-05 13:47:22.382417] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:22.909 [2024-12-05 13:47:22.382423] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:22.909 [2024-12-05 13:47:22.382430] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:22.909 [2024-12-05 13:47:22.382434] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:22.909 [2024-12-05 13:47:22.383795] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:22.909 [2024-12-05 13:47:22.383904] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:22.909 [2024-12-05 13:47:22.383986] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:22.909 [2024-12-05 13:47:22.383987] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:22.909 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:22.909 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:16:22.909 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:22.909 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:22.909 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:22.909 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:22.909 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:16:22.909 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.909 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:22.909 [2024-12-05 13:47:22.541178] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1190f30/0x1195420) succeed. 00:16:22.909 [2024-12-05 13:47:22.549317] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x11925c0/0x11d6ac0) succeed. 
00:16:22.909 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.909 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t rdma -a 192.168.100.8 -s 8009 discovery 00:16:22.909 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.909 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:22.909 [2024-12-05 13:47:22.684118] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 8009 *** 00:16:22.909 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.909 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 00:16:22.909 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.909 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:22.909 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.909 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.3 -s 4430 00:16:22.909 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.909 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:22.909 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.909 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.4 -s 4430 00:16:22.909 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.909 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:22.909 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.909 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:16:22.909 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.909 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:16:22.909 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:22.909 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.169 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:16:23.169 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:16:23.169 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:16:23.169 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:16:23.169 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:16:23.169 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.169 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:16:23.169 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:23.169 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.169 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:16:23.169 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:16:23.169 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:16:23.169 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:16:23.169 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:16:23.169 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:16:23.169 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:16:23.169 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:16:23.169 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:16:23.169 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:16:23.169 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 00:16:23.169 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.169 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:23.169 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.169 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.3 -s 4430 00:16:23.169 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.169 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:23.169 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.169 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.4 -s 4430 00:16:23.169 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.169 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:23.169 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.169 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq 
length 00:16:23.169 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:16:23.169 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.169 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:23.169 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.169 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:16:23.169 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:16:23.169 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:16:23.169 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:16:23.169 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:16:23.169 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:16:23.169 13:47:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:16:23.428 13:47:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:16:23.428 13:47:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:16:23.428 13:47:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 -n discovery 00:16:23.428 13:47:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.428 13:47:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:23.428 13:47:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.428 13:47:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:16:23.428 13:47:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.428 13:47:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:23.428 13:47:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.428 13:47:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:16:23.428 13:47:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:16:23.428 13:47:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:16:23.428 13:47:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.428 13:47:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:23.428 13:47:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:16:23.428 13:47:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@21 -- # sort 00:16:23.428 13:47:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.428 13:47:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:16:23.428 13:47:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:16:23.428 13:47:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:16:23.428 13:47:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:16:23.428 13:47:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:16:23.428 13:47:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:16:23.428 13:47:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:16:23.428 13:47:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:16:23.428 13:47:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:16:23.428 13:47:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:16:23.428 13:47:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:16:23.428 13:47:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:16:23.428 13:47:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:16:23.428 13:47:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:16:23.428 13:47:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:16:23.687 13:47:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:16:23.687 13:47:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:16:23.687 13:47:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:16:23.687 13:47:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:16:23.687 13:47:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:16:23.687 13:47:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:16:23.687 13:47:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ 
nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:16:23.687 13:47:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:16:23.687 13:47:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.687 13:47:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:23.687 13:47:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.687 13:47:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:16:23.687 13:47:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:16:23.687 13:47:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:16:23.687 13:47:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:16:23.687 13:47:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.687 13:47:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:16:23.687 13:47:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:23.687 13:47:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.687 13:47:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:16:23.687 13:47:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:16:23.687 13:47:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:16:23.687 13:47:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:16:23.687 13:47:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:16:23.687 13:47:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:16:23.687 13:47:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:16:23.687 13:47:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:16:23.945 13:47:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:16:23.945 13:47:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:16:23.945 13:47:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:16:23.945 13:47:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:16:23.945 13:47:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:16:23.945 13:47:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 
--hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:16:23.945 13:47:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:16:23.945 13:47:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:16:23.945 13:47:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:16:23.945 13:47:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:16:23.945 13:47:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:16:23.945 13:47:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:16:23.945 13:47:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:16:24.203 13:47:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:16:24.203 13:47:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:16:24.203 13:47:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.203 13:47:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:24.203 13:47:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.203 13:47:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:16:24.203 13:47:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:16:24.203 13:47:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.203 13:47:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:24.203 13:47:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.203 13:47:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:16:24.203 13:47:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:16:24.203 13:47:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:16:24.203 13:47:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:16:24.203 13:47:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:16:24.203 13:47:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:16:24.203 13:47:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 
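The nvme branch of get_referral_ips checks what a host actually sees: it pulls the discovery log page over RDMA and filters out every record except the referrals. A sketch of that verification, assuming nvme-cli is installed and the target is still listening on 192.168.100.8:8009 with the hostnqn/hostid pair used throughout this run:

    #!/usr/bin/env bash
    # List referral target addresses visible to the host, sorted, as the
    # get_referral_ips nvme path does above.
    nvme discover -t rdma -a 192.168.100.8 -s 8009 -o json \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 \
        --hostid=00bafac1-9c9c-e711-906e-0017a4403562 \
        | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' \
        | sort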
00:16:24.203 13:47:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:16:24.203 13:47:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:16:24.203 13:47:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:16:24.203 13:47:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:16:24.203 13:47:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:24.203 13:47:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:16:24.203 13:47:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:16:24.204 13:47:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:16:24.204 13:47:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:16:24.204 13:47:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:24.204 13:47:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:16:24.204 rmmod nvme_rdma 00:16:24.204 rmmod nvme_fabrics 00:16:24.204 13:47:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:24.204 13:47:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:16:24.204 13:47:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:16:24.204 13:47:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 1659550 ']' 00:16:24.204 13:47:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 1659550 00:16:24.204 13:47:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 1659550 ']' 00:16:24.204 13:47:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 1659550 00:16:24.204 13:47:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:16:24.204 13:47:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:24.204 13:47:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1659550 00:16:24.462 13:47:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:24.462 13:47:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:24.462 13:47:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1659550' 00:16:24.462 killing process with pid 1659550 00:16:24.462 13:47:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@973 -- # kill 1659550 00:16:24.462 13:47:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 1659550 00:16:24.722 13:47:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:24.722 13:47:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:16:24.722 00:16:24.722 real 0m8.373s 00:16:24.722 user 0m10.119s 00:16:24.722 sys 0m5.370s 00:16:24.722 13:47:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:24.722 13:47:24 
nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:24.722 ************************************ 00:16:24.722 END TEST nvmf_referrals 00:16:24.722 ************************************ 00:16:24.722 13:47:24 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=rdma 00:16:24.722 13:47:24 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:24.722 13:47:24 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:24.722 13:47:24 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:24.722 ************************************ 00:16:24.722 START TEST nvmf_connect_disconnect 00:16:24.722 ************************************ 00:16:24.722 13:47:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=rdma 00:16:24.722 * Looking for test storage... 00:16:24.722 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:16:24.722 13:47:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:24.722 13:47:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lcov --version 00:16:24.722 13:47:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:24.722 13:47:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:24.722 13:47:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:24.722 13:47:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:24.722 13:47:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:24.722 13:47:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:16:24.722 13:47:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:16:24.722 13:47:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:16:24.722 13:47:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:16:24.722 13:47:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:16:24.722 13:47:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:16:24.722 13:47:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:16:24.722 13:47:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:24.722 13:47:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:16:24.722 13:47:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:16:24.722 13:47:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:24.722 13:47:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:24.722 13:47:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:16:24.722 13:47:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:16:24.722 13:47:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:24.722 13:47:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:16:24.722 13:47:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:16:24.722 13:47:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:16:24.722 13:47:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:16:24.722 13:47:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:24.722 13:47:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:16:24.722 13:47:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:16:24.722 13:47:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:24.722 13:47:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:24.722 13:47:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:16:24.722 13:47:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:24.722 13:47:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:24.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:24.722 --rc genhtml_branch_coverage=1 00:16:24.722 --rc genhtml_function_coverage=1 00:16:24.722 --rc genhtml_legend=1 00:16:24.722 --rc geninfo_all_blocks=1 00:16:24.722 --rc geninfo_unexecuted_blocks=1 00:16:24.722 00:16:24.722 ' 00:16:24.722 13:47:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:24.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:24.722 --rc genhtml_branch_coverage=1 00:16:24.722 --rc genhtml_function_coverage=1 00:16:24.722 --rc genhtml_legend=1 00:16:24.722 --rc geninfo_all_blocks=1 00:16:24.723 --rc geninfo_unexecuted_blocks=1 00:16:24.723 00:16:24.723 ' 00:16:24.723 13:47:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:24.723 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:24.723 --rc genhtml_branch_coverage=1 00:16:24.723 --rc genhtml_function_coverage=1 00:16:24.723 --rc genhtml_legend=1 00:16:24.723 --rc geninfo_all_blocks=1 00:16:24.723 --rc geninfo_unexecuted_blocks=1 00:16:24.723 00:16:24.723 ' 00:16:24.723 13:47:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:24.723 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:24.723 --rc genhtml_branch_coverage=1 00:16:24.723 --rc genhtml_function_coverage=1 00:16:24.723 --rc genhtml_legend=1 00:16:24.723 --rc geninfo_all_blocks=1 00:16:24.723 --rc geninfo_unexecuted_blocks=1 00:16:24.723 00:16:24.723 ' 00:16:24.723 13:47:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- 
target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:16:24.723 13:47:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:16:24.723 13:47:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:24.723 13:47:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:24.723 13:47:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:24.723 13:47:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:24.723 13:47:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:24.723 13:47:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:24.723 13:47:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:24.723 13:47:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:24.723 13:47:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:24.723 13:47:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:24.723 13:47:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:16:24.723 13:47:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:16:24.723 13:47:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:24.723 13:47:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:24.723 13:47:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:24.723 13:47:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:24.723 13:47:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:16:24.723 13:47:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:16:24.983 13:47:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:24.983 13:47:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:24.983 13:47:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:24.983 13:47:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:24.983 13:47:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:24.983 13:47:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:24.983 13:47:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:16:24.983 13:47:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:24.983 13:47:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:16:24.983 13:47:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:24.983 13:47:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:24.983 13:47:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:24.983 13:47:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:24.983 13:47:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:24.983 13:47:24 
nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:24.983 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:24.983 13:47:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:24.983 13:47:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:24.983 13:47:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:24.983 13:47:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:24.983 13:47:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:24.983 13:47:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:16:24.983 13:47:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:16:24.983 13:47:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:24.983 13:47:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:24.983 13:47:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:24.983 13:47:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:24.983 13:47:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:24.983 13:47:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:24.983 13:47:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:24.983 13:47:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:24.983 13:47:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:24.983 13:47:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:16:24.983 13:47:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:16:31.550 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:31.550 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:16:31.550 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:31.550 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:31.550 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:31.550 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:31.550 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:31.550 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:16:31.550 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:16:31.550 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:16:31.550 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:16:31.550 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:16:31.550 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:16:31.550 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:16:31.550 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:16:31.550 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:31.550 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:31.550 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:31.550 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:31.550 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:31.550 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:31.550 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:31.550 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:31.550 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:31.550 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:31.550 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:31.550 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:31.550 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:31.550 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:16:31.551 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:16:31.551 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:16:31.551 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:16:31.551 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:16:31.551 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:31.551 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:31.551 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 
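[Annotation] gather_supported_nvmf_pci_devs, traced above, matches a table of Intel (0x8086) and Mellanox (0x15b3) device IDs (the e810, x722, and mlx5 families) against the PCI bus and announces each hit. The scan can be approximated with lspci; this is a sketch only, since the real helper reads SPDK's own pci_bus_cache rather than calling lspci:

#!/usr/bin/env bash
# Approximate the 'Found 0000:18:00.0 (0x15b3 - 0x1015)' lines with lspci.
mellanox=15b3
lspci -d "${mellanox}:" | awk '{print $1}' | while read -r addr; do
    # lspci -n prints 'addr class: vendor:device'; keep the device ID.
    dev_id=$(lspci -n -s "$addr" | awk '{print $3}' | cut -d: -f2)
    # lspci omits the PCI domain by default; 0000 is assumed here.
    echo "Found 0000:${addr} (0x${mellanox} - 0x${dev_id})"
done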
00:16:31.551 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:16:31.551 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:16:31.551 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:16:31.551 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:31.551 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:31.551 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:16:31.551 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:16:31.551 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:31.551 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:16:31.551 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:16:31.551 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:16:31.551 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:16:31.551 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:31.551 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:31.551 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:16:31.551 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:16:31.551 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:31.551 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:16:31.551 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:31.551 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:31.551 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:16:31.551 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:31.551 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:31.551 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:16:31.551 Found net devices under 0000:18:00.0: mlx_0_0 00:16:31.551 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:31.551 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:31.551 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:31.551 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 
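[Annotation] Each matched PCI function is then resolved to its kernel netdev through sysfs, which yields the 'Found net devices under …' lines for mlx_0_0 and mlx_0_1. The lookup in isolation (PCI address copied from this run):

#!/usr/bin/env bash
# Map a PCI function to its netdev name(s), as nvmf/common.sh@411-428 does.
pci=0000:18:00.0                                   # example from the log
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/mlx_0_0
pci_net_devs=("${pci_net_devs[@]##*/}")            # keep only the names
echo "Found net devices under $pci: ${pci_net_devs[*]}"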
00:16:31.551 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:31.551 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:31.551 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:16:31.551 Found net devices under 0000:18:00.1: mlx_0_1 00:16:31.551 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:31.551 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:31.551 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:16:31.551 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:31.551 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:16:31.551 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:16:31.551 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # rdma_device_init 00:16:31.551 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:16:31.551 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@62 -- # uname 00:16:31.551 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:16:31.551 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@66 -- # modprobe ib_cm 00:16:31.551 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@67 -- # modprobe ib_core 00:16:31.551 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@68 -- # modprobe ib_umad 00:16:31.551 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:16:31.551 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@70 -- # modprobe iw_cm 00:16:31.551 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:16:31.551 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:16:31.551 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@530 -- # allocate_nic_ips 00:16:31.551 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:16:31.551 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@77 -- # get_rdma_if_list 00:16:31.551 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:31.551 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:16:31.551 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:16:31.551 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:31.551 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:16:31.551 13:47:30 
nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:16:31.551 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:31.551 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:31.551 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_0 00:16:31.551 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@109 -- # continue 2 00:16:31.551 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:16:31.551 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:31.551 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:31.551 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:31.551 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:31.551 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_1 00:16:31.551 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@109 -- # continue 2 00:16:31.551 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:16:31.551 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:16:31.551 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:16:31.551 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:16:31.551 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:16:31.551 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:16:31.551 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:16:31.551 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:16:31.551 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:16:31.551 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:31.551 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:16:31.551 altname enp24s0f0np0 00:16:31.551 altname ens785f0np0 00:16:31.551 inet 192.168.100.8/24 scope global mlx_0_0 00:16:31.551 valid_lft forever preferred_lft forever 00:16:31.551 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:16:31.551 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:16:31.551 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:16:31.551 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:16:31.551 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # awk '{print 
$4}' 00:16:31.551 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:16:31.551 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:16:31.551 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:16:31.551 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:16:31.551 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:31.551 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:16:31.551 altname enp24s0f1np1 00:16:31.551 altname ens785f1np1 00:16:31.551 inet 192.168.100.9/24 scope global mlx_0_1 00:16:31.551 valid_lft forever preferred_lft forever 00:16:31.551 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:16:31.551 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:31.551 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:16:31.551 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:16:31.552 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:16:31.552 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@90 -- # get_rdma_if_list 00:16:31.552 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:31.552 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:16:31.552 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:16:31.552 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:31.552 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:16:31.552 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:16:31.552 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:31.552 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:31.552 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_0 00:16:31.552 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@109 -- # continue 2 00:16:31.552 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:16:31.552 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:31.552 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:31.552 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:31.552 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:31.552 13:47:30 
nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_1 00:16:31.552 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@109 -- # continue 2 00:16:31.552 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:16:31.552 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:16:31.552 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:16:31.552 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:16:31.552 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:16:31.552 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:16:31.552 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:16:31.552 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:16:31.552 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:16:31.552 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:16:31.552 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:16:31.552 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:16:31.552 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:16:31.552 192.168.100.9' 00:16:31.552 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:16:31.552 192.168.100.9' 00:16:31.552 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@485 -- # head -n 1 00:16:31.552 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:16:31.552 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:16:31.552 192.168.100.9' 00:16:31.552 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@486 -- # tail -n +2 00:16:31.552 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@486 -- # head -n 1 00:16:31.552 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:16:31.552 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:16:31.552 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:16:31.552 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:16:31.552 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:16:31.552 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:16:31.552 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:16:31.552 13:47:30 
nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:31.552 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:31.552 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:16:31.552 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # nvmfpid=1663289 00:16:31.552 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 1663289 00:16:31.552 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:31.552 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 1663289 ']' 00:16:31.552 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:31.552 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:31.552 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:31.552 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:31.552 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:31.552 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:16:31.552 [2024-12-05 13:47:30.690057] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 00:16:31.552 [2024-12-05 13:47:30.690104] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:31.552 [2024-12-05 13:47:30.764073] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:31.552 [2024-12-05 13:47:30.787268] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:31.552 [2024-12-05 13:47:30.787307] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:31.552 [2024-12-05 13:47:30.787313] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:31.552 [2024-12-05 13:47:30.787319] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:31.552 [2024-12-05 13:47:30.787324] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
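[Annotation] nvmfappstart launches the target with the flags shown (shm id 0, tracepoint mask 0xFFFF, core mask 0xF for the four reactors started below) and blocks in waitforlisten until the RPC socket answers. A simplified sketch of that start-and-wait pattern; the real waitforlisten adds retry limits and more careful liveness handling:

#!/usr/bin/env bash
# Start nvmf_tgt and wait for /var/tmp/spdk.sock to accept RPCs (simplified).
rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk
"$rootdir/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
echo "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..."
until "$rootdir/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
    kill -0 "$nvmfpid" || exit 1   # bail out if the target died
    sleep 0.5
done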
00:16:31.552 [2024-12-05 13:47:30.788728] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:31.552 [2024-12-05 13:47:30.788836] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:31.552 [2024-12-05 13:47:30.788862] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:31.552 [2024-12-05 13:47:30.788862] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:31.552 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:31.552 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:16:31.552 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:31.552 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:31.552 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:16:31.552 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:31.552 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0 00:16:31.552 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.552 13:47:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:16:31.552 [2024-12-05 13:47:30.928601] rdma.c:2773:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:16:31.552 [2024-12-05 13:47:30.947245] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x967f30/0x96c420) succeed. 00:16:31.552 [2024-12-05 13:47:30.955436] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x9695c0/0x9adac0) succeed. 
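[Annotation] The nvmf_create_transport RPC above is what triggers the two create_ib_device notices (mlx5_0, mlx5_1). The full fixture that connect_disconnect.sh assembles next — a 64 MiB malloc bdev with 512-byte blocks, exported as nqn.2016-06.io.spdk:cnode1 on 192.168.100.8:4420 — reduces to this rpc.py sequence (rpc_cmd in the trace effectively forwards to scripts/rpc.py):

#!/usr/bin/env bash
# The RPC sequence behind the connect_disconnect fixture, per the trace.
rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0
$rpc bdev_malloc_create 64 512        # returns the bdev name, Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420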
00:16:31.552 13:47:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.552 13:47:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:16:31.552 13:47:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.552 13:47:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:16:31.552 13:47:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.552 13:47:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:16:31.552 13:47:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:31.552 13:47:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.552 13:47:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:16:31.552 13:47:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.552 13:47:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:31.552 13:47:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.552 13:47:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:16:31.552 13:47:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.552 13:47:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:16:31.552 13:47:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.552 13:47:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:16:31.553 [2024-12-05 13:47:31.101356] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:16:31.553 13:47:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.553 13:47:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:16:31.553 13:47:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:16:31.553 13:47:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:16:31.553 13:47:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:16:34.871 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:38.159 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:41.450 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:43.986 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:47.270 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:50.730 NQN:nqn.2016-06.io.spdk:cnode1 
disconnected 1 controller(s) 00:16:54.013 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:56.544 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:59.850 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:03.129 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:06.417 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:09.703 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:12.235 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:15.522 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:18.804 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:22.088 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:25.379 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:27.917 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:31.224 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:34.511 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:37.792 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:40.325 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:43.613 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:46.896 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:50.184 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:53.471 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:56.240 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:59.526 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:02.810 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:05.344 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:08.631 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:11.921 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:15.231 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:18.512 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:21.044 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:24.333 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:27.618 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:30.907 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:34.190 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:36.713 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:40.004 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:43.287 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:46.574 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:49.863 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:52.397 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:55.679 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:58.957 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:02.393 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:04.926 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:08.207 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:11.480 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:14.776 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:18.073 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:20.650 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:23.931 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:27.220 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:30.507 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:33.042 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:36.327 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:39.612 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:42.893 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:46.172 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:48.705 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:51.992 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:55.279 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:58.583 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:01.116 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:04.401 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:07.690 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:11.111 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:13.644 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:16.923 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:20.210 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:23.495 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:26.781 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:29.306 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:32.589 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:35.894 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:39.179 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:41.712 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:44.996 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:48.279 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:51.576 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:54.859 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:57.406 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:00.688 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:03.981 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:07.282 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:10.567 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:13.130 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:16.466 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:19.747 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:23.029 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:25.556 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:28.838 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:32.172 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:35.452 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:38.731 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:41.258 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:44.536 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:44.536 13:52:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:21:44.536 13:52:44 
nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:21:44.536 13:52:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:44.536 13:52:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:21:44.536 13:52:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:21:44.536 13:52:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:21:44.536 13:52:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:21:44.536 13:52:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:44.536 13:52:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:21:44.536 rmmod nvme_rdma 00:21:44.536 rmmod nvme_fabrics 00:21:44.536 13:52:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:44.536 13:52:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:21:44.536 13:52:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:21:44.536 13:52:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 1663289 ']' 00:21:44.536 13:52:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 1663289 00:21:44.536 13:52:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 1663289 ']' 00:21:44.536 13:52:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 1663289 00:21:44.536 13:52:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 00:21:44.536 13:52:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:44.536 13:52:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1663289 00:21:44.536 13:52:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:44.536 13:52:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:44.536 13:52:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1663289' 00:21:44.536 killing process with pid 1663289 00:21:44.536 13:52:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 1663289 00:21:44.536 13:52:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 1663289 00:21:44.795 13:52:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:44.795 13:52:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:21:44.795 00:21:44.795 real 5m20.065s 00:21:44.795 user 20m50.998s 00:21:44.795 sys 0m15.349s 00:21:44.795 13:52:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:44.795 13:52:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:21:44.796 
************************************ 00:21:44.796 END TEST nvmf_connect_disconnect 00:21:44.796 ************************************ 00:21:44.796 13:52:44 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=rdma 00:21:44.796 13:52:44 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:44.796 13:52:44 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:44.796 13:52:44 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:44.796 ************************************ 00:21:44.796 START TEST nvmf_multitarget 00:21:44.796 ************************************ 00:21:44.796 13:52:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=rdma 00:21:44.796 * Looking for test storage... 00:21:44.796 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:21:44.796 13:52:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:44.796 13:52:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lcov --version 00:21:44.796 13:52:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:45.056 13:52:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:45.056 13:52:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:45.056 13:52:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:45.056 13:52:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:45.056 13:52:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:21:45.056 13:52:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:21:45.056 13:52:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:21:45.056 13:52:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:21:45.056 13:52:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:21:45.056 13:52:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:21:45.056 13:52:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:21:45.056 13:52:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:45.056 13:52:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:21:45.056 13:52:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:21:45.056 13:52:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:45.056 13:52:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:45.056 13:52:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:21:45.056 13:52:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:21:45.056 13:52:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:45.056 13:52:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:21:45.056 13:52:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:21:45.056 13:52:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:21:45.056 13:52:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:21:45.056 13:52:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:45.056 13:52:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:21:45.056 13:52:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:21:45.056 13:52:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:45.056 13:52:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:45.056 13:52:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:21:45.056 13:52:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:45.056 13:52:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:45.056 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:45.056 --rc genhtml_branch_coverage=1 00:21:45.056 --rc genhtml_function_coverage=1 00:21:45.056 --rc genhtml_legend=1 00:21:45.056 --rc geninfo_all_blocks=1 00:21:45.056 --rc geninfo_unexecuted_blocks=1 00:21:45.056 00:21:45.056 ' 00:21:45.056 13:52:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:45.056 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:45.056 --rc genhtml_branch_coverage=1 00:21:45.056 --rc genhtml_function_coverage=1 00:21:45.056 --rc genhtml_legend=1 00:21:45.056 --rc geninfo_all_blocks=1 00:21:45.056 --rc geninfo_unexecuted_blocks=1 00:21:45.056 00:21:45.056 ' 00:21:45.056 13:52:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:45.056 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:45.056 --rc genhtml_branch_coverage=1 00:21:45.056 --rc genhtml_function_coverage=1 00:21:45.056 --rc genhtml_legend=1 00:21:45.056 --rc geninfo_all_blocks=1 00:21:45.056 --rc geninfo_unexecuted_blocks=1 00:21:45.056 00:21:45.056 ' 00:21:45.056 13:52:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:45.056 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:45.056 --rc genhtml_branch_coverage=1 00:21:45.056 --rc genhtml_function_coverage=1 00:21:45.056 --rc genhtml_legend=1 00:21:45.056 --rc geninfo_all_blocks=1 00:21:45.056 --rc geninfo_unexecuted_blocks=1 00:21:45.056 00:21:45.056 ' 00:21:45.056 13:52:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:21:45.056 13:52:44 
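The cmp_versions walk above decides whether the installed lcov (1.15) predates 2.x before choosing coverage flags. A compact re-implementation of the same component-wise comparison, offered as a sketch; the real scripts/common.sh helper also splits on '-' and ':' and handles non-numeric components:

    version_lt() {   # returns 0 when $1 < $2
        local IFS=.
        local -a a=($1) b=($2)
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            local x=${a[i]:-0} y=${b[i]:-0}   # missing components count as 0
            (( x < y )) && return 0
            (( x > y )) && return 1
        done
        return 1   # equal is not strictly less-than
    }

    version_lt 1.15 2 && echo "lcov is older than 2.x"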
nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:21:45.056 13:52:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:45.056 13:52:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:45.056 13:52:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:45.056 13:52:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:45.056 13:52:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:45.056 13:52:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:45.056 13:52:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:45.056 13:52:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:45.056 13:52:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:45.056 13:52:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:45.056 13:52:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:21:45.056 13:52:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:21:45.056 13:52:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:45.056 13:52:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:45.056 13:52:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:45.056 13:52:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:45.056 13:52:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:21:45.056 13:52:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:21:45.056 13:52:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:45.056 13:52:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:45.056 13:52:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:45.057 13:52:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:45.057 13:52:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:45.057 13:52:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:45.057 13:52:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:21:45.057 13:52:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:45.057 13:52:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:21:45.057 13:52:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:45.057 13:52:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:45.057 13:52:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:45.057 13:52:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:45.057 13:52:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:45.057 13:52:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:45.057 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:45.057 13:52:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:45.057 13:52:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:45.057 13:52:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:45.057 13:52:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:21:45.057 13:52:44 
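Note the genuine error surfaced in the trace above: nvmf/common.sh line 33 runs '[ "" -eq 1 ]' and bash complains "integer expression expected" because the tested variable is empty. The harness tolerates it, but the defensive form expands with a default; FLAG below is a hypothetical stand-in for whichever variable line 33 actually tests:

    FLAG=""                          # hypothetical; empty, as in the trace
    if [ "${FLAG:-0}" -eq 1 ]; then  # ':-0' keeps the operand numeric
        echo "feature enabled"
    fi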
nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:21:45.057 13:52:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:21:45.057 13:52:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:45.057 13:52:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:45.057 13:52:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:45.057 13:52:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:45.057 13:52:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:45.057 13:52:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:45.057 13:52:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:45.057 13:52:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:45.057 13:52:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:45.057 13:52:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:21:45.057 13:52:44 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:21:51.624 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:51.624 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:21:51.624 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:51.624 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:51.624 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:51.624 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:51.624 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:51.624 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:21:51.624 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:51.624 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:21:51.624 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:21:51.624 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:21:51.624 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:21:51.624 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:21:51.624 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:21:51.624 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:51.624 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:51.624 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:51.624 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:51.624 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:51.624 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:51.624 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:51.624 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:51.624 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:51.624 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:51.624 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:51.624 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:51.624 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:51.625 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:21:51.625 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:21:51.625 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:21:51.625 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:21:51.625 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:21:51.625 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:51.625 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:51.625 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:21:51.625 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:21:51.625 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:21:51.625 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:21:51.625 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:51.625 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:21:51.625 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:21:51.625 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:21:51.625 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:51.625 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:21:51.625 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:21:51.625 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ mlx5_core == 
unknown ]] 00:21:51.625 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:21:51.625 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:51.625 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:21:51.625 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:21:51.625 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:21:51.625 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:51.625 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:21:51.625 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:51.625 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:51.625 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:21:51.625 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:51.625 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:51.625 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:21:51.625 Found net devices under 0000:18:00.0: mlx_0_0 00:21:51.625 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:51.625 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:51.625 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:51.625 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:21:51.625 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:51.625 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:51.625 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:21:51.625 Found net devices under 0000:18:00.1: mlx_0_1 00:21:51.625 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:51.625 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:51.625 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:21:51.625 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:51.625 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:21:51.625 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:21:51.625 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@448 -- # rdma_device_init 00:21:51.625 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:21:51.625 13:52:50 
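Device discovery above maps each matched PCI function to its network interfaces through sysfs; that is where the "Found net devices under 0000:18:00.0: mlx_0_0" lines come from. A sketch of that lookup, with the bus/device/function taken from the log:

    pci=0000:18:00.0   # example BDF from the trace
    net_devs=()
    # the kernel exposes a NIC's netdevs under its PCI device node
    for d in "/sys/bus/pci/devices/$pci/net/"*; do
        [ -e "$d" ] && net_devs+=("${d##*/}")
    done
    echo "Found net devices under $pci: ${net_devs[*]}"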
nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@62 -- # uname 00:21:51.625 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:21:51.625 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@66 -- # modprobe ib_cm 00:21:51.625 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@67 -- # modprobe ib_core 00:21:51.625 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@68 -- # modprobe ib_umad 00:21:51.625 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:21:51.625 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@70 -- # modprobe iw_cm 00:21:51.625 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:21:51.625 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:21:51.625 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@530 -- # allocate_nic_ips 00:21:51.625 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:21:51.625 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@77 -- # get_rdma_if_list 00:21:51.625 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:51.625 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:21:51.625 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:21:51.625 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:51.625 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:21:51.625 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:21:51.625 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:51.625 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:51.625 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@108 -- # echo mlx_0_0 00:21:51.625 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@109 -- # continue 2 00:21:51.625 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:21:51.625 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:51.625 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:51.625 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:51.625 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:51.625 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@108 -- # echo mlx_0_1 00:21:51.625 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@109 -- # continue 2 00:21:51.625 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:21:51.625 13:52:50 
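rdma_device_init above loads the kernel RDMA stack and then builds the interface list. A sketch under the assumption that walking /sys/class/infiniband yields the same interfaces as the harness's rxe_cfg cross-check:

    # modules exactly as modprobe'd in the trace
    for m in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
        modprobe "$m"
    done
    # list netdevs that back an RDMA device (mlx_0_0/mlx_0_1 on this rig)
    for dev in /sys/class/infiniband/*/device/net/*; do
        [ -e "$dev" ] && echo "${dev##*/}"
    done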
nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:21:51.625 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:21:51.625 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:21:51.625 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # awk '{print $4}' 00:21:51.625 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # cut -d/ -f1 00:21:51.625 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:21:51.625 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:21:51.625 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:21:51.625 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:51.625 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:21:51.625 altname enp24s0f0np0 00:21:51.625 altname ens785f0np0 00:21:51.625 inet 192.168.100.8/24 scope global mlx_0_0 00:21:51.625 valid_lft forever preferred_lft forever 00:21:51.625 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:21:51.625 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:21:51.625 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:21:51.625 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:21:51.625 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # awk '{print $4}' 00:21:51.625 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # cut -d/ -f1 00:21:51.625 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:21:51.625 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:21:51.625 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:21:51.625 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:51.625 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:21:51.625 altname enp24s0f1np1 00:21:51.625 altname ens785f1np1 00:21:51.625 inet 192.168.100.9/24 scope global mlx_0_1 00:21:51.625 valid_lft forever preferred_lft forever 00:21:51.625 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:21:51.625 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:51.625 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:21:51.625 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:21:51.625 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:21:51.625 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@90 -- # get_rdma_if_list 00:21:51.625 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:51.625 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:21:51.625 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:21:51.626 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:51.626 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:21:51.626 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:21:51.626 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:51.626 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:51.626 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@108 -- # echo mlx_0_0 00:21:51.626 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@109 -- # continue 2 00:21:51.626 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:21:51.626 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:51.626 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:51.626 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:51.626 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:51.626 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@108 -- # echo mlx_0_1 00:21:51.626 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@109 -- # continue 2 00:21:51.626 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:21:51.626 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:21:51.626 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:21:51.626 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:21:51.626 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # awk '{print $4}' 00:21:51.626 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # cut -d/ -f1 00:21:51.626 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:21:51.626 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:21:51.626 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:21:51.626 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:21:51.626 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # awk '{print $4}' 00:21:51.626 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # cut -d/ -f1 00:21:51.626 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:21:51.626 192.168.100.9' 00:21:51.626 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:21:51.626 192.168.100.9' 00:21:51.626 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@485 -- # 
head -n 1 00:21:51.626 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:21:51.626 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:21:51.626 192.168.100.9' 00:21:51.626 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@486 -- # tail -n +2 00:21:51.626 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@486 -- # head -n 1 00:21:51.626 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:21:51.626 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:21:51.626 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:21:51.626 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:21:51.626 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:21:51.626 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:21:51.626 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:21:51.626 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:51.626 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:51.626 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:21:51.626 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=1725417 00:21:51.626 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 1725417 00:21:51.626 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:51.626 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 1725417 ']' 00:21:51.626 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:51.626 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:51.626 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:51.626 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:51.626 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:51.626 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:21:51.626 [2024-12-05 13:52:50.796098] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 
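The address plumbing above, condensed into one sketch: each interface's IPv4 address is parsed out of 'ip -o -4 addr show', and the resulting list is split into first and second target IPs with head/tail exactly as the trace does:

    get_ip_address() {
        ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
    }
    RDMA_IP_LIST=$(for ifc in mlx_0_0 mlx_0_1; do get_ip_address "$ifc"; done)
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)
    echo "$NVMF_FIRST_TARGET_IP $NVMF_SECOND_TARGET_IP"   # 192.168.100.8 192.168.100.9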
00:21:51.626 [2024-12-05 13:52:50.796141] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:51.626 [2024-12-05 13:52:50.868229] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:51.626 [2024-12-05 13:52:50.889667] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:51.626 [2024-12-05 13:52:50.889706] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:51.626 [2024-12-05 13:52:50.889712] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:51.626 [2024-12-05 13:52:50.889718] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:51.626 [2024-12-05 13:52:50.889722] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:51.626 [2024-12-05 13:52:50.890925] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:51.626 [2024-12-05 13:52:50.891031] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:51.626 [2024-12-05 13:52:50.891138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:51.626 [2024-12-05 13:52:50.891140] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:51.626 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:51.626 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:21:51.626 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:51.626 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:51.626 13:52:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:21:51.626 13:52:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:51.626 13:52:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:21:51.626 13:52:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:21:51.626 13:52:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:21:51.626 13:52:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:21:51.626 13:52:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:21:51.626 "nvmf_tgt_1" 00:21:51.626 13:52:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:21:51.626 "nvmf_tgt_2" 00:21:51.627 13:52:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:21:51.627 
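The multitarget test itself is a short create/count/delete cycle driven through multitarget_rpc.py, asserting the target count with jq at each step; the deletes follow in the next trace lines. The whole sequence as a sketch, with the rpc path and flags exactly as shown in the log (what '-s 32' sizes is not visible in the trace):

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py
    [ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]   # only the default target
    $rpc nvmf_create_target -n nvmf_tgt_1 -s 32
    $rpc nvmf_create_target -n nvmf_tgt_2 -s 32
    [ "$($rpc nvmf_get_targets | jq length)" -eq 3 ]
    $rpc nvmf_delete_target -n nvmf_tgt_1
    $rpc nvmf_delete_target -n nvmf_tgt_2
    [ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]   # back to the default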
13:52:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:21:51.627 13:52:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:21:51.627 13:52:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:21:51.885 true 00:21:51.885 13:52:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:21:51.885 true 00:21:51.885 13:52:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:21:51.885 13:52:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:21:52.143 13:52:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:21:52.143 13:52:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:21:52.143 13:52:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:21:52.143 13:52:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:52.143 13:52:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:21:52.143 13:52:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:21:52.143 13:52:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:21:52.143 13:52:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:21:52.143 13:52:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:52.143 13:52:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:21:52.143 rmmod nvme_rdma 00:21:52.143 rmmod nvme_fabrics 00:21:52.144 13:52:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:52.144 13:52:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:21:52.144 13:52:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:21:52.144 13:52:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 1725417 ']' 00:21:52.144 13:52:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 1725417 00:21:52.144 13:52:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 1725417 ']' 00:21:52.144 13:52:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 1725417 00:21:52.144 13:52:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:21:52.144 13:52:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:52.144 13:52:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1725417 00:21:52.144 13:52:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:52.144 13:52:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:52.144 13:52:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1725417' 00:21:52.144 killing process with pid 1725417 00:21:52.144 13:52:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 1725417 00:21:52.144 13:52:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 1725417 00:21:52.144 13:52:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:52.144 13:52:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:21:52.144 00:21:52.144 real 0m7.460s 00:21:52.144 user 0m6.814s 00:21:52.144 sys 0m4.976s 00:21:52.144 13:52:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:52.144 13:52:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:21:52.144 ************************************ 00:21:52.144 END TEST nvmf_multitarget 00:21:52.144 ************************************ 00:21:52.404 13:52:52 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=rdma 00:21:52.404 13:52:52 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:52.404 13:52:52 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:52.404 13:52:52 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:52.404 ************************************ 00:21:52.404 START TEST nvmf_rpc 00:21:52.404 ************************************ 00:21:52.404 13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=rdma 00:21:52.404 * Looking for test storage... 
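Every test in this log is framed by run_test, which prints the asterisk banners and the real/user/sys summary seen above. A hedged reconstruction from the visible output; the real autotest_common.sh wrapper also validates its arguments and manages xtrace state:

    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"                 # produces the real/user/sys lines
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }

    run_test nvmf_rpc ./test/nvmf/target/rpc.sh --transport=rdma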
00:21:52.404 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:21:52.404 13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:52.404 13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:21:52.404 13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:52.404 13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:52.404 13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:52.404 13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:52.404 13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:52.405 13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:21:52.405 13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:21:52.405 13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:21:52.405 13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:21:52.405 13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:21:52.405 13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:21:52.405 13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:21:52.405 13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:52.405 13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:21:52.405 13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:21:52.405 13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:52.405 13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:52.405 13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:21:52.405 13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:21:52.405 13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:52.405 13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:21:52.405 13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:21:52.405 13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:21:52.405 13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:21:52.405 13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:52.405 13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:21:52.405 13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:21:52.405 13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:52.405 13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:52.405 13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:21:52.405 13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:52.405 13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:52.405 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:52.405 --rc genhtml_branch_coverage=1 00:21:52.405 --rc genhtml_function_coverage=1 00:21:52.405 --rc genhtml_legend=1 00:21:52.405 --rc geninfo_all_blocks=1 00:21:52.405 --rc geninfo_unexecuted_blocks=1 00:21:52.405 00:21:52.405 ' 00:21:52.405 13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:52.405 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:52.405 --rc genhtml_branch_coverage=1 00:21:52.405 --rc genhtml_function_coverage=1 00:21:52.405 --rc genhtml_legend=1 00:21:52.405 --rc geninfo_all_blocks=1 00:21:52.405 --rc geninfo_unexecuted_blocks=1 00:21:52.405 00:21:52.405 ' 00:21:52.405 13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:52.405 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:52.405 --rc genhtml_branch_coverage=1 00:21:52.405 --rc genhtml_function_coverage=1 00:21:52.405 --rc genhtml_legend=1 00:21:52.405 --rc geninfo_all_blocks=1 00:21:52.405 --rc geninfo_unexecuted_blocks=1 00:21:52.405 00:21:52.405 ' 00:21:52.405 13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:52.405 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:52.405 --rc genhtml_branch_coverage=1 00:21:52.405 --rc genhtml_function_coverage=1 00:21:52.405 --rc genhtml_legend=1 00:21:52.405 --rc geninfo_all_blocks=1 00:21:52.405 --rc geninfo_unexecuted_blocks=1 00:21:52.405 00:21:52.405 ' 00:21:52.405 13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:21:52.405 13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:21:52.405 13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:21:52.405 13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:52.405 13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:52.405 13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:52.405 13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:52.405 13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:52.405 13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:52.405 13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:52.405 13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:52.405 13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:52.405 13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:21:52.405 13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:21:52.405 13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:52.405 13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:52.405 13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:52.405 13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:52.405 13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:21:52.405 13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:21:52.405 13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:52.405 13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:52.405 13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:52.405 13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:52.405 13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:52.405 13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:52.405 13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:21:52.405 13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:52.405 13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:21:52.405 13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:52.405 13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:52.405 13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:52.405 13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:52.405 13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:52.405 13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:52.405 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:52.405 13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:52.405 13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:52.405 13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:52.405 13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:21:52.405 13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:21:52.405 13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:21:52.405 13:52:52 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:52.405 13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:52.405 13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:52.405 13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:52.405 13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:52.405 13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:52.405 13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:52.665 13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:52.665 13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:52.665 13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:21:52.665 13:52:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:59.228 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:59.228 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:21:59.228 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:59.228 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:59.228 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:59.228 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:59.228 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:59.228 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:21:59.228 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:59.228 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:21:59.228 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:21:59.228 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:21:59.228 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:21:59.228 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:21:59.228 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:21:59.228 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:59.228 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:59.228 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:59.228 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:59.228 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:59.228 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:59.228 13:52:58 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:59.228 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:59.228 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:59.228 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:59.228 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:59.228 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:59.228 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:59.228 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:21:59.228 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:21:59.228 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:21:59.228 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:21:59.228 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:21:59.228 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:59.228 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:59.228 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:21:59.228 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:21:59.228 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:21:59.228 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:21:59.228 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:59.228 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:21:59.228 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:21:59.228 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:21:59.228 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:59.228 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:21:59.228 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:21:59.228 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:21:59.228 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:21:59.228 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:59.228 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:21:59.228 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:21:59.228 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:21:59.228 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 
)) 00:21:59.228 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:21:59.228 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:59.228 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:59.228 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:21:59.228 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:59.228 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:59.228 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:21:59.228 Found net devices under 0000:18:00.0: mlx_0_0 00:21:59.228 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:59.228 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:59.228 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:59.228 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:21:59.228 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:59.228 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:59.228 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:21:59.228 Found net devices under 0000:18:00.1: mlx_0_1 00:21:59.229 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:59.229 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:59.229 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:21:59.229 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:59.229 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:21:59.229 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:21:59.229 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@448 -- # rdma_device_init 00:21:59.229 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:21:59.229 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@62 -- # uname 00:21:59.229 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:21:59.229 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@66 -- # modprobe ib_cm 00:21:59.229 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@67 -- # modprobe ib_core 00:21:59.229 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@68 -- # modprobe ib_umad 00:21:59.229 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:21:59.229 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@70 -- # modprobe iw_cm 00:21:59.229 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:21:59.229 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:21:59.229 13:52:58 
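The modprobe sequence traced above is rdma_device_init loading the kernel RDMA stack that NVMe-oF over RDMA depends on. A condensed sketch of the same sequence (module list taken verbatim from the trace; the failure handling is added here for illustration and is not the helper's exact behavior):

    # Load the InfiniBand/RDMA core modules; only meaningful on Linux.
    if [[ $(uname) == Linux ]]; then
        for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
            modprobe "$mod" || { echo "failed to load $mod" >&2; exit 1; }
        done
    fi
    # The host-side transport module is loaded later, once IPs are known:
    # modprobe nvme-rdma
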
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@530 -- # allocate_nic_ips 00:21:59.229 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:21:59.229 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@77 -- # get_rdma_if_list 00:21:59.229 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:59.229 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:21:59.229 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:21:59.229 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:59.229 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:21:59.229 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:21:59.229 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:59.229 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:59.229 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@108 -- # echo mlx_0_0 00:21:59.229 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@109 -- # continue 2 00:21:59.229 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:21:59.229 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:59.229 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:59.229 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:59.229 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:59.229 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@108 -- # echo mlx_0_1 00:21:59.229 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@109 -- # continue 2 00:21:59.229 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:21:59.229 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:21:59.229 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:21:59.229 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:21:59.229 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # awk '{print $4}' 00:21:59.229 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # cut -d/ -f1 00:21:59.229 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:21:59.229 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:21:59.229 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:21:59.229 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:59.229 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:21:59.229 altname enp24s0f0np0 00:21:59.229 altname ens785f0np0 00:21:59.229 inet 192.168.100.8/24 scope global mlx_0_0 00:21:59.229 valid_lft forever preferred_lft forever 00:21:59.229 
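The IP lookup just traced for mlx_0_0 reduces to a single pipeline. A standalone sketch of the same get_ip_address logic (equivalent to the pipeline shown in the trace, not the verbatim helper):

    # Print the first IPv4 address bound to an interface, without the /prefix.
    get_ip_address() {
        local interface=$1
        # `ip -o` emits one record per line; field 4 is "ADDR/PREFIX".
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }

    get_ip_address mlx_0_0    # -> 192.168.100.8 on this testbed
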
13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:21:59.229 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:21:59.229 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:21:59.229 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:21:59.229 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # awk '{print $4}' 00:21:59.229 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # cut -d/ -f1 00:21:59.229 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:21:59.229 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:21:59.229 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:21:59.229 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:59.229 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:21:59.229 altname enp24s0f1np1 00:21:59.229 altname ens785f1np1 00:21:59.229 inet 192.168.100.9/24 scope global mlx_0_1 00:21:59.229 valid_lft forever preferred_lft forever 00:21:59.229 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:21:59.229 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:59.229 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:21:59.229 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:21:59.229 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:21:59.229 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@90 -- # get_rdma_if_list 00:21:59.229 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:59.229 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:21:59.229 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:21:59.229 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:59.229 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:21:59.229 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:21:59.229 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:59.229 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:59.229 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@108 -- # echo mlx_0_0 00:21:59.229 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@109 -- # continue 2 00:21:59.229 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:21:59.229 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:59.229 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:59.229 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 
00:21:59.229 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:59.229 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@108 -- # echo mlx_0_1 00:21:59.229 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@109 -- # continue 2 00:21:59.229 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:21:59.229 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:21:59.229 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:21:59.229 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:21:59.229 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # awk '{print $4}' 00:21:59.229 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # cut -d/ -f1 00:21:59.229 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:21:59.229 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:21:59.229 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:21:59.229 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:21:59.229 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # awk '{print $4}' 00:21:59.229 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # cut -d/ -f1 00:21:59.229 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:21:59.229 192.168.100.9' 00:21:59.229 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:21:59.229 192.168.100.9' 00:21:59.229 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@485 -- # head -n 1 00:21:59.229 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:21:59.229 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:21:59.229 192.168.100.9' 00:21:59.229 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@486 -- # tail -n +2 00:21:59.229 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@486 -- # head -n 1 00:21:59.229 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:21:59.229 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:21:59.229 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:21:59.229 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:21:59.229 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:21:59.229 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:21:59.229 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:21:59.229 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:59.229 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:59.229 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
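The trace above builds RDMA_IP_LIST by intersecting the detected net devices with the RDMA-capable interfaces from rxe_cfg, then peels off the first and second target IPs with head/tail. A condensed sketch of both steps (net_devs and rxe_net_devs are the arrays populated earlier in the trace; addresses are the values observed on this run):

    # Emit only those net devices that also appear in the RDMA-capable list.
    get_rdma_if_list() {
        local net_dev rxe_net_dev
        for net_dev in "${net_devs[@]}"; do
            for rxe_net_dev in "${rxe_net_devs[@]}"; do
                if [[ $net_dev == "$rxe_net_dev" ]]; then
                    echo "$net_dev"
                    continue 2    # matched; move on to the next net_dev
                fi
            done
        done
    }

    # One address per line, as captured in RDMA_IP_LIST above.
    RDMA_IP_LIST='192.168.100.8
    192.168.100.9'
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                # 192.168.100.8
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # 192.168.100.9
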
00:21:59.229 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=1728959 00:21:59.229 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 1728959 00:21:59.229 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:59.229 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 1728959 ']' 00:21:59.229 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:59.229 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:59.229 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:59.229 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:59.229 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:59.229 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:59.229 [2024-12-05 13:52:58.313559] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 00:21:59.229 [2024-12-05 13:52:58.313613] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:59.229 [2024-12-05 13:52:58.389071] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:59.229 [2024-12-05 13:52:58.411853] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:59.229 [2024-12-05 13:52:58.411891] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:59.229 [2024-12-05 13:52:58.411898] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:59.229 [2024-12-05 13:52:58.411903] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:59.229 [2024-12-05 13:52:58.411907] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
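nvmfappstart, traced above, launches nvmf_tgt in the background and blocks in waitforlisten until the RPC socket answers, keeping the PID (1728959 here) for later cleanup. A simplified stand-in for that startup-and-wait pattern (the real waitforlisten polls via rpc.py; checking for the UNIX socket file is the same idea in miniature, with the binary path as in the trace):

    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!

    rpc_addr=/var/tmp/spdk.sock
    for ((i = 0; i < 100; i++)); do
        [[ -S $rpc_addr ]] && break     # socket file appears once the target listens
        kill -0 "$nvmfpid" || exit 1    # abort if the target died during startup
        sleep 0.1
    done
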
00:21:59.229 [2024-12-05 13:52:58.413273] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:59.229 [2024-12-05 13:52:58.413403] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:59.229 [2024-12-05 13:52:58.413464] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:59.229 [2024-12-05 13:52:58.413465] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:59.229 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:59.229 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:21:59.229 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:59.229 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:59.229 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:59.229 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:59.229 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:21:59.229 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.229 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:59.229 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.229 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:21:59.229 "tick_rate": 2700000000, 00:21:59.229 "poll_groups": [ 00:21:59.229 { 00:21:59.229 "name": "nvmf_tgt_poll_group_000", 00:21:59.229 "admin_qpairs": 0, 00:21:59.229 "io_qpairs": 0, 00:21:59.229 "current_admin_qpairs": 0, 00:21:59.229 "current_io_qpairs": 0, 00:21:59.229 "pending_bdev_io": 0, 00:21:59.229 "completed_nvme_io": 0, 00:21:59.229 "transports": [] 00:21:59.229 }, 00:21:59.229 { 00:21:59.229 "name": "nvmf_tgt_poll_group_001", 00:21:59.229 "admin_qpairs": 0, 00:21:59.229 "io_qpairs": 0, 00:21:59.229 "current_admin_qpairs": 0, 00:21:59.229 "current_io_qpairs": 0, 00:21:59.229 "pending_bdev_io": 0, 00:21:59.229 "completed_nvme_io": 0, 00:21:59.229 "transports": [] 00:21:59.229 }, 00:21:59.229 { 00:21:59.229 "name": "nvmf_tgt_poll_group_002", 00:21:59.229 "admin_qpairs": 0, 00:21:59.229 "io_qpairs": 0, 00:21:59.229 "current_admin_qpairs": 0, 00:21:59.229 "current_io_qpairs": 0, 00:21:59.229 "pending_bdev_io": 0, 00:21:59.229 "completed_nvme_io": 0, 00:21:59.229 "transports": [] 00:21:59.229 }, 00:21:59.229 { 00:21:59.229 "name": "nvmf_tgt_poll_group_003", 00:21:59.229 "admin_qpairs": 0, 00:21:59.229 "io_qpairs": 0, 00:21:59.229 "current_admin_qpairs": 0, 00:21:59.229 "current_io_qpairs": 0, 00:21:59.229 "pending_bdev_io": 0, 00:21:59.229 "completed_nvme_io": 0, 00:21:59.229 "transports": [] 00:21:59.229 } 00:21:59.229 ] 00:21:59.229 }' 00:21:59.229 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:21:59.229 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:21:59.229 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:21:59.230 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:21:59.230 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 
== 4 )) 00:21:59.230 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:21:59.230 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:21:59.230 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:21:59.230 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.230 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:59.230 [2024-12-05 13:52:58.669655] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1edcf90/0x1ee1480) succeed. 00:21:59.230 [2024-12-05 13:52:58.678087] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1ede620/0x1f22b20) succeed. 00:21:59.230 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.230 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:21:59.230 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.230 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:59.230 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.230 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:21:59.230 "tick_rate": 2700000000, 00:21:59.230 "poll_groups": [ 00:21:59.230 { 00:21:59.230 "name": "nvmf_tgt_poll_group_000", 00:21:59.230 "admin_qpairs": 0, 00:21:59.230 "io_qpairs": 0, 00:21:59.230 "current_admin_qpairs": 0, 00:21:59.230 "current_io_qpairs": 0, 00:21:59.230 "pending_bdev_io": 0, 00:21:59.230 "completed_nvme_io": 0, 00:21:59.230 "transports": [ 00:21:59.230 { 00:21:59.230 "trtype": "RDMA", 00:21:59.230 "pending_data_buffer": 0, 00:21:59.230 "devices": [ 00:21:59.230 { 00:21:59.230 "name": "mlx5_0", 00:21:59.230 "polls": 15099, 00:21:59.230 "idle_polls": 15099, 00:21:59.230 "completions": 0, 00:21:59.230 "requests": 0, 00:21:59.230 "request_latency": 0, 00:21:59.230 "pending_free_request": 0, 00:21:59.230 "pending_rdma_read": 0, 00:21:59.230 "pending_rdma_write": 0, 00:21:59.230 "pending_rdma_send": 0, 00:21:59.230 "total_send_wrs": 0, 00:21:59.230 "send_doorbell_updates": 0, 00:21:59.230 "total_recv_wrs": 4096, 00:21:59.230 "recv_doorbell_updates": 1 00:21:59.230 }, 00:21:59.230 { 00:21:59.230 "name": "mlx5_1", 00:21:59.230 "polls": 15099, 00:21:59.230 "idle_polls": 15099, 00:21:59.230 "completions": 0, 00:21:59.230 "requests": 0, 00:21:59.230 "request_latency": 0, 00:21:59.230 "pending_free_request": 0, 00:21:59.230 "pending_rdma_read": 0, 00:21:59.230 "pending_rdma_write": 0, 00:21:59.230 "pending_rdma_send": 0, 00:21:59.230 "total_send_wrs": 0, 00:21:59.230 "send_doorbell_updates": 0, 00:21:59.230 "total_recv_wrs": 4096, 00:21:59.230 "recv_doorbell_updates": 1 00:21:59.230 } 00:21:59.230 ] 00:21:59.230 } 00:21:59.230 ] 00:21:59.230 }, 00:21:59.230 { 00:21:59.230 "name": "nvmf_tgt_poll_group_001", 00:21:59.230 "admin_qpairs": 0, 00:21:59.230 "io_qpairs": 0, 00:21:59.230 "current_admin_qpairs": 0, 00:21:59.230 "current_io_qpairs": 0, 00:21:59.230 "pending_bdev_io": 0, 00:21:59.230 "completed_nvme_io": 0, 00:21:59.230 "transports": [ 00:21:59.230 { 00:21:59.230 "trtype": "RDMA", 00:21:59.230 "pending_data_buffer": 0, 00:21:59.230 "devices": [ 00:21:59.230 { 00:21:59.230 "name": "mlx5_0", 
00:21:59.230 "polls": 9459, 00:21:59.230 "idle_polls": 9459, 00:21:59.230 "completions": 0, 00:21:59.230 "requests": 0, 00:21:59.230 "request_latency": 0, 00:21:59.230 "pending_free_request": 0, 00:21:59.230 "pending_rdma_read": 0, 00:21:59.230 "pending_rdma_write": 0, 00:21:59.230 "pending_rdma_send": 0, 00:21:59.230 "total_send_wrs": 0, 00:21:59.230 "send_doorbell_updates": 0, 00:21:59.230 "total_recv_wrs": 4096, 00:21:59.230 "recv_doorbell_updates": 1 00:21:59.230 }, 00:21:59.230 { 00:21:59.230 "name": "mlx5_1", 00:21:59.230 "polls": 9459, 00:21:59.230 "idle_polls": 9459, 00:21:59.230 "completions": 0, 00:21:59.230 "requests": 0, 00:21:59.230 "request_latency": 0, 00:21:59.230 "pending_free_request": 0, 00:21:59.230 "pending_rdma_read": 0, 00:21:59.230 "pending_rdma_write": 0, 00:21:59.230 "pending_rdma_send": 0, 00:21:59.230 "total_send_wrs": 0, 00:21:59.230 "send_doorbell_updates": 0, 00:21:59.230 "total_recv_wrs": 4096, 00:21:59.230 "recv_doorbell_updates": 1 00:21:59.230 } 00:21:59.230 ] 00:21:59.230 } 00:21:59.230 ] 00:21:59.230 }, 00:21:59.230 { 00:21:59.230 "name": "nvmf_tgt_poll_group_002", 00:21:59.230 "admin_qpairs": 0, 00:21:59.230 "io_qpairs": 0, 00:21:59.230 "current_admin_qpairs": 0, 00:21:59.230 "current_io_qpairs": 0, 00:21:59.230 "pending_bdev_io": 0, 00:21:59.230 "completed_nvme_io": 0, 00:21:59.230 "transports": [ 00:21:59.230 { 00:21:59.230 "trtype": "RDMA", 00:21:59.230 "pending_data_buffer": 0, 00:21:59.230 "devices": [ 00:21:59.230 { 00:21:59.230 "name": "mlx5_0", 00:21:59.230 "polls": 5390, 00:21:59.230 "idle_polls": 5390, 00:21:59.230 "completions": 0, 00:21:59.230 "requests": 0, 00:21:59.230 "request_latency": 0, 00:21:59.230 "pending_free_request": 0, 00:21:59.230 "pending_rdma_read": 0, 00:21:59.230 "pending_rdma_write": 0, 00:21:59.230 "pending_rdma_send": 0, 00:21:59.230 "total_send_wrs": 0, 00:21:59.230 "send_doorbell_updates": 0, 00:21:59.230 "total_recv_wrs": 4096, 00:21:59.230 "recv_doorbell_updates": 1 00:21:59.230 }, 00:21:59.230 { 00:21:59.230 "name": "mlx5_1", 00:21:59.230 "polls": 5390, 00:21:59.230 "idle_polls": 5390, 00:21:59.230 "completions": 0, 00:21:59.230 "requests": 0, 00:21:59.230 "request_latency": 0, 00:21:59.230 "pending_free_request": 0, 00:21:59.230 "pending_rdma_read": 0, 00:21:59.230 "pending_rdma_write": 0, 00:21:59.230 "pending_rdma_send": 0, 00:21:59.230 "total_send_wrs": 0, 00:21:59.230 "send_doorbell_updates": 0, 00:21:59.230 "total_recv_wrs": 4096, 00:21:59.230 "recv_doorbell_updates": 1 00:21:59.230 } 00:21:59.230 ] 00:21:59.230 } 00:21:59.230 ] 00:21:59.230 }, 00:21:59.230 { 00:21:59.230 "name": "nvmf_tgt_poll_group_003", 00:21:59.230 "admin_qpairs": 0, 00:21:59.230 "io_qpairs": 0, 00:21:59.230 "current_admin_qpairs": 0, 00:21:59.230 "current_io_qpairs": 0, 00:21:59.230 "pending_bdev_io": 0, 00:21:59.230 "completed_nvme_io": 0, 00:21:59.230 "transports": [ 00:21:59.230 { 00:21:59.230 "trtype": "RDMA", 00:21:59.230 "pending_data_buffer": 0, 00:21:59.230 "devices": [ 00:21:59.230 { 00:21:59.230 "name": "mlx5_0", 00:21:59.230 "polls": 936, 00:21:59.230 "idle_polls": 936, 00:21:59.230 "completions": 0, 00:21:59.230 "requests": 0, 00:21:59.230 "request_latency": 0, 00:21:59.230 "pending_free_request": 0, 00:21:59.230 "pending_rdma_read": 0, 00:21:59.230 "pending_rdma_write": 0, 00:21:59.230 "pending_rdma_send": 0, 00:21:59.230 "total_send_wrs": 0, 00:21:59.230 "send_doorbell_updates": 0, 00:21:59.230 "total_recv_wrs": 4096, 00:21:59.230 "recv_doorbell_updates": 1 00:21:59.230 }, 00:21:59.230 { 00:21:59.230 "name": "mlx5_1", 
00:21:59.230 "polls": 936, 00:21:59.230 "idle_polls": 936, 00:21:59.230 "completions": 0, 00:21:59.230 "requests": 0, 00:21:59.230 "request_latency": 0, 00:21:59.230 "pending_free_request": 0, 00:21:59.230 "pending_rdma_read": 0, 00:21:59.230 "pending_rdma_write": 0, 00:21:59.230 "pending_rdma_send": 0, 00:21:59.230 "total_send_wrs": 0, 00:21:59.230 "send_doorbell_updates": 0, 00:21:59.230 "total_recv_wrs": 4096, 00:21:59.230 "recv_doorbell_updates": 1 00:21:59.230 } 00:21:59.230 ] 00:21:59.230 } 00:21:59.230 ] 00:21:59.230 } 00:21:59.230 ] 00:21:59.230 }' 00:21:59.230 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:21:59.230 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:21:59.230 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:21:59.230 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:21:59.230 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:21:59.230 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:21:59.230 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:21:59.230 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:21:59.230 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:21:59.230 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:21:59.230 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == rdma ']' 00:21:59.230 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@40 -- # jcount '.poll_groups[0].transports[].trtype' 00:21:59.230 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[0].transports[].trtype' 00:21:59.230 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[0].transports[].trtype' 00:21:59.230 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:21:59.230 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@40 -- # (( 1 == 1 )) 00:21:59.230 13:52:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@41 -- # jq -r '.poll_groups[0].transports[0].trtype' 00:21:59.230 13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@41 -- # transport_type=RDMA 00:21:59.230 13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@42 -- # [[ rdma == \r\d\m\a ]] 00:21:59.230 13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@43 -- # jcount '.poll_groups[0].transports[0].devices[].name' 00:21:59.230 13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[0].transports[0].devices[].name' 00:21:59.230 13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:21:59.230 13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[0].transports[0].devices[].name' 00:21:59.230 13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@43 -- # (( 2 > 0 )) 00:21:59.230 13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:21:59.230 13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:21:59.230 13:52:59 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:59.230 13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.230 13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:59.489 Malloc1 00:21:59.489 13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.489 13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:21:59.489 13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.489 13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:59.489 13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.489 13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:59.489 13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.489 13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:59.489 13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.489 13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:21:59.489 13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.489 13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:59.489 13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.489 13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:21:59.489 13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.489 13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:59.489 [2024-12-05 13:52:59.111340] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:21:59.489 13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.489 13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -a 192.168.100.8 -s 4420 00:21:59.489 13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:21:59.489 13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -a 192.168.100.8 -s 4420 00:21:59.489 13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:21:59.489 13:52:59 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:59.489 13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:21:59.489 13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:59.489 13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:21:59.489 13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:59.489 13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:21:59.489 13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:21:59.489 13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -a 192.168.100.8 -s 4420 00:21:59.489 [2024-12-05 13:52:59.151205] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562' 00:21:59.489 Failed to write to /dev/nvme-fabrics: Input/output error 00:21:59.489 could not add new controller: failed to write to nvme-fabrics device 00:21:59.489 13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:21:59.489 13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:59.489 13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:59.489 13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:59.489 13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:21:59.489 13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.489 13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:59.489 13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.489 13:52:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:22:00.424 13:53:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:22:00.424 13:53:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:22:00.424 13:53:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:22:00.424 13:53:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:22:00.424 13:53:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:22:02.949 13:53:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:22:02.949 13:53:02 
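The rejection above is the point of the test: with allow_any_host disabled and no host entry, the connect fails with "does not allow host", and only after nvmf_subsystem_add_host does the identical connect succeed. The sequence, condensed from the trace (NQN and address values as on this testbed; `&& exit 1` stands in for the suite's NOT helper, and rpc_cmd is its wrapper around scripts/rpc.py):

    # 1) Expected failure: the host NQN is not on the subsystem's allow list.
    nvme connect -i 15 --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
        -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 && exit 1

    # 2) Authorize this host, after which the same connect succeeds.
    rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 "$NVME_HOSTNQN"
    nvme connect -i 15 --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
        -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420
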
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:22:02.949 13:53:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:22:02.949 13:53:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:22:02.949 13:53:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:22:02.949 13:53:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:22:02.949 13:53:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:22:03.516 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:22:03.516 13:53:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:22:03.516 13:53:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:22:03.516 13:53:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:22:03.516 13:53:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:22:03.516 13:53:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:22:03.516 13:53:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:22:03.516 13:53:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:22:03.516 13:53:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:22:03.516 13:53:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.516 13:53:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:03.516 13:53:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.516 13:53:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:22:03.516 13:53:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:22:03.516 13:53:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:22:03.516 13:53:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:22:03.516 13:53:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:03.516 13:53:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:22:03.516 13:53:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:03.516 13:53:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:22:03.516 13:53:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:03.516 13:53:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:22:03.516 13:53:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:22:03.517 13:53:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:22:03.517 [2024-12-05 13:53:03.242853] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562' 00:22:03.517 Failed to write to /dev/nvme-fabrics: Input/output error 00:22:03.517 could not add new controller: failed to write to nvme-fabrics device 00:22:03.517 13:53:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:22:03.517 13:53:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:03.517 13:53:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:03.517 13:53:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:03.517 13:53:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:22:03.517 13:53:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.517 13:53:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:03.517 13:53:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.517 13:53:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:22:04.450 13:53:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:22:04.450 13:53:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:22:04.450 13:53:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:22:04.450 13:53:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:22:04.450 13:53:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:22:06.980 13:53:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:22:06.980 13:53:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:22:06.980 13:53:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:22:06.980 13:53:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:22:06.980 13:53:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:22:06.980 13:53:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:22:06.980 13:53:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:22:07.547 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:22:07.547 13:53:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:22:07.547 13:53:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:22:07.547 13:53:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:22:07.547 13:53:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:22:07.547 13:53:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:22:07.547 13:53:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:22:07.547 13:53:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:22:07.547 13:53:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:07.547 13:53:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.547 13:53:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:07.547 13:53:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.547 13:53:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:22:07.547 13:53:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:22:07.547 13:53:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:22:07.547 13:53:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.547 13:53:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:07.547 13:53:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.547 13:53:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:22:07.547 13:53:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.547 13:53:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:07.547 [2024-12-05 13:53:07.307057] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:22:07.547 13:53:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.547 13:53:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:22:07.547 13:53:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.547 13:53:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:07.547 13:53:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.547 13:53:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:22:07.547 13:53:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.547 13:53:07 
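Each of the five loop iterations (loops=5, set near the top of rpc.sh) rebuilds the same subsystem from scratch, exercising the create/listen/attach/teardown RPCs end to end. One iteration, condensed from the trace:

    for i in $(seq 1 "$loops"); do
        rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
        rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
            -t rdma -a 192.168.100.8 -s 4420
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
        rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
        # ... connect, verify the serial is visible, disconnect (see below) ...
        rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
        rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    done
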
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:07.547 13:53:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.547 13:53:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:22:08.482 13:53:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:22:08.482 13:53:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:22:08.482 13:53:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:22:08.482 13:53:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:22:08.482 13:53:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:22:11.013 13:53:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:22:11.013 13:53:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:22:11.013 13:53:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:22:11.013 13:53:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:22:11.013 13:53:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:22:11.013 13:53:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:22:11.013 13:53:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:22:11.579 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:22:11.579 13:53:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:22:11.579 13:53:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:22:11.579 13:53:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:22:11.579 13:53:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:22:11.579 13:53:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:22:11.579 13:53:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:22:11.579 13:53:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:22:11.579 13:53:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:22:11.579 13:53:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.579 13:53:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:11.579 13:53:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.579 13:53:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:11.579 13:53:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:22:11.579 13:53:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:11.579 13:53:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.579 13:53:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:22:11.579 13:53:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:22:11.579 13:53:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.579 13:53:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:11.579 13:53:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.579 13:53:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:22:11.579 13:53:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.579 13:53:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:11.579 [2024-12-05 13:53:11.316824] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:22:11.579 13:53:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.579 13:53:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:22:11.579 13:53:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.579 13:53:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:11.579 13:53:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.579 13:53:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:22:11.579 13:53:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.579 13:53:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:11.579 13:53:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.579 13:53:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:22:12.514 13:53:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:22:12.514 13:53:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:22:12.514 13:53:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:22:12.514 13:53:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:22:12.514 13:53:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:22:15.044 13:53:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:22:15.044 13:53:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:22:15.044 
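The waitforserial/waitforserial_disconnect calls seen above are the harness's settle loops from common/autotest_common.sh (the @1202-1235 markers in the trace). An approximate shape, inferred from the traced commands rather than copied from the source:

    # waitforserial: poll lsblk until a block device with the given serial shows up.
    waitforserial() {
        local serial=$1 i=0
        local nvme_device_counter=${2:-1} nvme_devices=0
        while (( i++ <= 15 )); do
            sleep 2
            nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
            (( nvme_devices == nvme_device_counter )) && return 0
        done
        return 1
    }

    # waitforserial_disconnect: the inverse; succeed once the serial is no longer listed.
    waitforserial_disconnect() {
        local serial=$1 i=0
        while lsblk -l -o NAME,SERIAL | grep -q -w "$serial"; do
            (( i++ > 15 )) && return 1
            sleep 1
        done
        return 0
    }

In the passes traced here the serial is already gone by the first probe, so waitforserial_disconnect returns immediately, which is why the trace shows back-to-back lsblk/grep lines with no sleep between them.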
13:53:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:22:15.044 13:53:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:22:15.044 13:53:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:22:15.044 13:53:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:22:15.045 13:53:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:22:15.612 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:22:15.612 13:53:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:22:15.612 13:53:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:22:15.612 13:53:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:22:15.612 13:53:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:22:15.612 13:53:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:22:15.612 13:53:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:22:15.612 13:53:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:22:15.612 13:53:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:22:15.612 13:53:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.612 13:53:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:15.612 13:53:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.612 13:53:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:15.612 13:53:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.612 13:53:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:15.612 13:53:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.612 13:53:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:22:15.612 13:53:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:22:15.612 13:53:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.612 13:53:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:15.612 13:53:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.612 13:53:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:22:15.612 13:53:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.612 13:53:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:15.612 [2024-12-05 13:53:15.344924] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA 
Target Listening on 192.168.100.8 port 4420 *** 00:22:15.612 13:53:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.612 13:53:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:22:15.612 13:53:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.612 13:53:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:15.612 13:53:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.612 13:53:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:22:15.612 13:53:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.612 13:53:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:15.612 13:53:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.612 13:53:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:22:16.656 13:53:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:22:16.656 13:53:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:22:16.656 13:53:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:22:16.656 13:53:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:22:16.656 13:53:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:22:18.559 13:53:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:22:18.559 13:53:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:22:18.559 13:53:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:22:18.559 13:53:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:22:18.559 13:53:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:22:18.559 13:53:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:22:18.559 13:53:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:22:19.495 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:22:19.495 13:53:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:22:19.495 13:53:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:22:19.495 13:53:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:22:19.495 13:53:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:22:19.495 13:53:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:22:19.495 13:53:19 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:22:19.495 13:53:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:22:19.495 13:53:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:22:19.495 13:53:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.495 13:53:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:19.495 13:53:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.495 13:53:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:19.495 13:53:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.495 13:53:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:19.495 13:53:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.495 13:53:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:22:19.495 13:53:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:22:19.495 13:53:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.495 13:53:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:19.495 13:53:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.496 13:53:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:22:19.496 13:53:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.496 13:53:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:19.753 [2024-12-05 13:53:19.347919] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:22:19.753 13:53:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.753 13:53:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:22:19.753 13:53:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.753 13:53:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:19.753 13:53:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.753 13:53:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:22:19.753 13:53:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.753 13:53:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:19.753 13:53:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.753 13:53:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 
--hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:22:20.685 13:53:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:22:20.685 13:53:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:22:20.685 13:53:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:22:20.685 13:53:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:22:20.685 13:53:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:22:22.589 13:53:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:22:22.589 13:53:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:22:22.589 13:53:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:22:22.589 13:53:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:22:22.590 13:53:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:22:22.590 13:53:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:22:22.590 13:53:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:22:23.526 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:22:23.526 13:53:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:22:23.526 13:53:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:22:23.526 13:53:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:22:23.526 13:53:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:22:23.526 13:53:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:22:23.526 13:53:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:22:23.526 13:53:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:22:23.526 13:53:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:22:23.526 13:53:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.526 13:53:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:23.526 13:53:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.527 13:53:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:23.527 13:53:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.527 13:53:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:23.527 13:53:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.527 13:53:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:22:23.527 13:53:23 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:22:23.527 13:53:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.527 13:53:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:23.527 13:53:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.527 13:53:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:22:23.527 13:53:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.527 13:53:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:23.527 [2024-12-05 13:53:23.378133] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:22:23.785 13:53:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.785 13:53:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:22:23.785 13:53:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.785 13:53:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:23.785 13:53:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.785 13:53:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:22:23.785 13:53:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.785 13:53:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:23.785 13:53:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.785 13:53:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:22:24.722 13:53:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:22:24.722 13:53:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:22:24.722 13:53:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:22:24.722 13:53:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:22:24.722 13:53:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:22:26.621 13:53:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:22:26.621 13:53:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:22:26.621 13:53:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:22:26.621 13:53:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:22:26.621 13:53:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == 
nvme_device_counter )) 00:22:26.621 13:53:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:22:26.621 13:53:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:22:27.555 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:22:27.555 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:22:27.555 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:22:27.555 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:22:27.555 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:22:27.555 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:22:27.555 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:22:27.555 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:22:27.555 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:22:27.555 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.555 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:27.555 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.555 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:27.555 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.555 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:27.555 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.555 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:22:27.555 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:22:27.555 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:22:27.555 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.555 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:27.555 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.555 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:22:27.555 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.555 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:27.813 [2024-12-05 13:53:27.407887] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:22:27.813 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.813 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:27.813 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.813 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:27.813 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.813 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:22:27.813 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.813 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:27.813 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.813 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:22:27.813 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.813 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:27.813 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.813 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:27.813 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.813 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:27.813 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.813 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:22:27.813 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:22:27.813 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.813 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:27.813 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.813 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:22:27.813 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.813 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:27.813 [2024-12-05 13:53:27.456046] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:22:27.813 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.813 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:27.813 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.813 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:27.813 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.813 13:53:27 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:22:27.813 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.813 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:27.813 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.814 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:22:27.814 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.814 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:27.814 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.814 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:27.814 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.814 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:27.814 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.814 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:22:27.814 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:22:27.814 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.814 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:27.814 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.814 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:22:27.814 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.814 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:27.814 [2024-12-05 13:53:27.504219] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:22:27.814 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.814 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:27.814 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.814 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:27.814 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.814 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:22:27.814 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.814 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:27.814 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.814 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:22:27.814 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.814 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:27.814 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.814 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:27.814 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.814 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:27.814 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.814 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:22:27.814 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:22:27.814 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.814 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:27.814 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.814 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:22:27.814 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.814 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:27.814 [2024-12-05 13:53:27.552419] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:22:27.814 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.814 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:27.814 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.814 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:27.814 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.814 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:22:27.814 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.814 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:27.814 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.814 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:22:27.814 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.814 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
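The loop entered at target/rpc.sh line 99 differs from the earlier one in two ways: no host ever connects, and nvmf_subsystem_add_ns is called without -n, so the target auto-assigns the lowest free NSID (1), which the test then removes explicitly. A reconstruction of one pass, again inferred from the trace rather than quoted from rpc.sh:

    # One pass of the rpc.sh@99 loop: pure RPC exercise, no host I/O.
    for i in $(seq 1 $loops); do
        rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
        rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1    # no -n: NSID auto-assigned
        rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
        rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1       # remove the auto-assigned NSID
        rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    done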
00:22:27.814 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.814 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:27.814 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.814 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:27.814 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.814 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:22:27.814 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:22:27.814 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.814 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:27.814 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.814 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:22:27.814 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.814 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:27.814 [2024-12-05 13:53:27.600567] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:22:27.814 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.814 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:27.814 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.815 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:27.815 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.815 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:22:27.815 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.815 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:27.815 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.815 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:22:27.815 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.815 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:27.815 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.815 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:27.815 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.815 13:53:27 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:27.815 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.815 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:22:27.815 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.815 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:28.072 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.072 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:22:28.072 "tick_rate": 2700000000, 00:22:28.072 "poll_groups": [ 00:22:28.072 { 00:22:28.072 "name": "nvmf_tgt_poll_group_000", 00:22:28.072 "admin_qpairs": 2, 00:22:28.072 "io_qpairs": 27, 00:22:28.072 "current_admin_qpairs": 0, 00:22:28.072 "current_io_qpairs": 0, 00:22:28.072 "pending_bdev_io": 0, 00:22:28.072 "completed_nvme_io": 78, 00:22:28.072 "transports": [ 00:22:28.072 { 00:22:28.072 "trtype": "RDMA", 00:22:28.072 "pending_data_buffer": 0, 00:22:28.072 "devices": [ 00:22:28.072 { 00:22:28.072 "name": "mlx5_0", 00:22:28.072 "polls": 3713589, 00:22:28.072 "idle_polls": 3713341, 00:22:28.072 "completions": 269, 00:22:28.072 "requests": 134, 00:22:28.072 "request_latency": 23291972, 00:22:28.072 "pending_free_request": 0, 00:22:28.072 "pending_rdma_read": 0, 00:22:28.072 "pending_rdma_write": 0, 00:22:28.072 "pending_rdma_send": 0, 00:22:28.072 "total_send_wrs": 211, 00:22:28.072 "send_doorbell_updates": 123, 00:22:28.072 "total_recv_wrs": 4230, 00:22:28.072 "recv_doorbell_updates": 123 00:22:28.072 }, 00:22:28.072 { 00:22:28.072 "name": "mlx5_1", 00:22:28.072 "polls": 3713589, 00:22:28.072 "idle_polls": 3713589, 00:22:28.072 "completions": 0, 00:22:28.072 "requests": 0, 00:22:28.072 "request_latency": 0, 00:22:28.072 "pending_free_request": 0, 00:22:28.072 "pending_rdma_read": 0, 00:22:28.072 "pending_rdma_write": 0, 00:22:28.072 "pending_rdma_send": 0, 00:22:28.072 "total_send_wrs": 0, 00:22:28.072 "send_doorbell_updates": 0, 00:22:28.072 "total_recv_wrs": 4096, 00:22:28.072 "recv_doorbell_updates": 1 00:22:28.072 } 00:22:28.072 ] 00:22:28.072 } 00:22:28.072 ] 00:22:28.072 }, 00:22:28.072 { 00:22:28.072 "name": "nvmf_tgt_poll_group_001", 00:22:28.072 "admin_qpairs": 2, 00:22:28.072 "io_qpairs": 26, 00:22:28.072 "current_admin_qpairs": 0, 00:22:28.072 "current_io_qpairs": 0, 00:22:28.072 "pending_bdev_io": 0, 00:22:28.072 "completed_nvme_io": 127, 00:22:28.072 "transports": [ 00:22:28.072 { 00:22:28.072 "trtype": "RDMA", 00:22:28.072 "pending_data_buffer": 0, 00:22:28.072 "devices": [ 00:22:28.072 { 00:22:28.072 "name": "mlx5_0", 00:22:28.072 "polls": 3609938, 00:22:28.072 "idle_polls": 3609613, 00:22:28.072 "completions": 366, 00:22:28.072 "requests": 183, 00:22:28.072 "request_latency": 37902448, 00:22:28.072 "pending_free_request": 0, 00:22:28.072 "pending_rdma_read": 0, 00:22:28.072 "pending_rdma_write": 0, 00:22:28.072 "pending_rdma_send": 0, 00:22:28.072 "total_send_wrs": 310, 00:22:28.072 "send_doorbell_updates": 159, 00:22:28.072 "total_recv_wrs": 4279, 00:22:28.072 "recv_doorbell_updates": 160 00:22:28.072 }, 00:22:28.072 { 00:22:28.072 "name": "mlx5_1", 00:22:28.072 "polls": 3609938, 00:22:28.072 "idle_polls": 3609938, 00:22:28.072 "completions": 0, 00:22:28.072 "requests": 0, 00:22:28.072 "request_latency": 0, 00:22:28.072 "pending_free_request": 0, 00:22:28.072 
"pending_rdma_read": 0, 00:22:28.072 "pending_rdma_write": 0, 00:22:28.072 "pending_rdma_send": 0, 00:22:28.072 "total_send_wrs": 0, 00:22:28.072 "send_doorbell_updates": 0, 00:22:28.072 "total_recv_wrs": 4096, 00:22:28.072 "recv_doorbell_updates": 1 00:22:28.072 } 00:22:28.072 ] 00:22:28.072 } 00:22:28.072 ] 00:22:28.072 }, 00:22:28.072 { 00:22:28.072 "name": "nvmf_tgt_poll_group_002", 00:22:28.072 "admin_qpairs": 1, 00:22:28.072 "io_qpairs": 26, 00:22:28.072 "current_admin_qpairs": 0, 00:22:28.072 "current_io_qpairs": 0, 00:22:28.072 "pending_bdev_io": 0, 00:22:28.072 "completed_nvme_io": 124, 00:22:28.072 "transports": [ 00:22:28.072 { 00:22:28.072 "trtype": "RDMA", 00:22:28.072 "pending_data_buffer": 0, 00:22:28.072 "devices": [ 00:22:28.072 { 00:22:28.072 "name": "mlx5_0", 00:22:28.072 "polls": 3775604, 00:22:28.072 "idle_polls": 3775335, 00:22:28.072 "completions": 307, 00:22:28.072 "requests": 153, 00:22:28.072 "request_latency": 34896438, 00:22:28.072 "pending_free_request": 0, 00:22:28.072 "pending_rdma_read": 0, 00:22:28.072 "pending_rdma_write": 0, 00:22:28.072 "pending_rdma_send": 0, 00:22:28.072 "total_send_wrs": 265, 00:22:28.072 "send_doorbell_updates": 131, 00:22:28.072 "total_recv_wrs": 4249, 00:22:28.072 "recv_doorbell_updates": 131 00:22:28.072 }, 00:22:28.072 { 00:22:28.072 "name": "mlx5_1", 00:22:28.072 "polls": 3775604, 00:22:28.072 "idle_polls": 3775604, 00:22:28.072 "completions": 0, 00:22:28.072 "requests": 0, 00:22:28.072 "request_latency": 0, 00:22:28.072 "pending_free_request": 0, 00:22:28.072 "pending_rdma_read": 0, 00:22:28.072 "pending_rdma_write": 0, 00:22:28.072 "pending_rdma_send": 0, 00:22:28.072 "total_send_wrs": 0, 00:22:28.072 "send_doorbell_updates": 0, 00:22:28.072 "total_recv_wrs": 4096, 00:22:28.072 "recv_doorbell_updates": 1 00:22:28.072 } 00:22:28.072 ] 00:22:28.072 } 00:22:28.072 ] 00:22:28.073 }, 00:22:28.073 { 00:22:28.073 "name": "nvmf_tgt_poll_group_003", 00:22:28.073 "admin_qpairs": 2, 00:22:28.073 "io_qpairs": 26, 00:22:28.073 "current_admin_qpairs": 0, 00:22:28.073 "current_io_qpairs": 0, 00:22:28.073 "pending_bdev_io": 0, 00:22:28.073 "completed_nvme_io": 126, 00:22:28.073 "transports": [ 00:22:28.073 { 00:22:28.073 "trtype": "RDMA", 00:22:28.073 "pending_data_buffer": 0, 00:22:28.073 "devices": [ 00:22:28.073 { 00:22:28.073 "name": "mlx5_0", 00:22:28.073 "polls": 2868405, 00:22:28.073 "idle_polls": 2868097, 00:22:28.073 "completions": 358, 00:22:28.073 "requests": 179, 00:22:28.073 "request_latency": 40417084, 00:22:28.073 "pending_free_request": 0, 00:22:28.073 "pending_rdma_read": 0, 00:22:28.073 "pending_rdma_write": 0, 00:22:28.073 "pending_rdma_send": 0, 00:22:28.073 "total_send_wrs": 303, 00:22:28.073 "send_doorbell_updates": 153, 00:22:28.073 "total_recv_wrs": 4275, 00:22:28.073 "recv_doorbell_updates": 154 00:22:28.073 }, 00:22:28.073 { 00:22:28.073 "name": "mlx5_1", 00:22:28.073 "polls": 2868405, 00:22:28.073 "idle_polls": 2868405, 00:22:28.073 "completions": 0, 00:22:28.073 "requests": 0, 00:22:28.073 "request_latency": 0, 00:22:28.073 "pending_free_request": 0, 00:22:28.073 "pending_rdma_read": 0, 00:22:28.073 "pending_rdma_write": 0, 00:22:28.073 "pending_rdma_send": 0, 00:22:28.073 "total_send_wrs": 0, 00:22:28.073 "send_doorbell_updates": 0, 00:22:28.073 "total_recv_wrs": 4096, 00:22:28.073 "recv_doorbell_updates": 1 00:22:28.073 } 00:22:28.073 ] 00:22:28.073 } 00:22:28.073 ] 00:22:28.073 } 00:22:28.073 ] 00:22:28.073 }' 00:22:28.073 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum 
'.poll_groups[].admin_qpairs' 00:22:28.073 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:22:28.073 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:22:28.073 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:22:28.073 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:22:28.073 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:22:28.073 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:22:28.073 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:22:28.073 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:22:28.073 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 105 > 0 )) 00:22:28.073 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == rdma ']' 00:22:28.073 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@117 -- # jsum '.poll_groups[].transports[].devices[].completions' 00:22:28.073 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].transports[].devices[].completions' 00:22:28.073 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].transports[].devices[].completions' 00:22:28.073 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:22:28.073 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@117 -- # (( 1300 > 0 )) 00:22:28.073 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@118 -- # jsum '.poll_groups[].transports[].devices[].request_latency' 00:22:28.073 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].transports[].devices[].request_latency' 00:22:28.073 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].transports[].devices[].request_latency' 00:22:28.073 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:22:28.073 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@118 -- # (( 136507942 > 0 )) 00:22:28.073 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:22:28.073 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:22:28.073 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:28.073 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:22:28.073 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:22:28.073 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:22:28.073 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:22:28.073 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:28.073 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:22:28.073 rmmod nvme_rdma 00:22:28.073 rmmod nvme_fabrics 00:22:28.073 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:28.073 
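The nvmf_get_stats dump above is the payoff of the whole test: one entry per target poll group (one per reactor core), each with admin/io queue-pair counts and per-RDMA-device counters (work requests, doorbell updates, request latency in ticks of the reported tick_rate). Note that all traffic landed on mlx5_0; mlx5_1 shows only the 4096 pre-posted receive WRs and no completions. The jsum helper exercised at rpc.sh@112-118 just sums a jq projection over that JSON; a sketch consistent with the @19-20 trace lines (feeding the helper from "$stats" is an assumption, since only the jq and awk stages are visible):

    # Sum one numeric field across all poll groups of the captured stats JSON.
    jsum() {
        local filter=$1
        echo "$stats" | jq "$filter" | awk '{s+=$1} END {print s}'
    }

    (( $(jsum '.poll_groups[].admin_qpairs') > 0 ))                            # 7 in this run
    (( $(jsum '.poll_groups[].io_qpairs') > 0 ))                               # 105
    (( $(jsum '.poll_groups[].transports[].devices[].completions') > 0 ))      # 1300
    (( $(jsum '.poll_groups[].transports[].devices[].request_latency') > 0 ))  # 136507942 ticks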
13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:22:28.073 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:22:28.073 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 1728959 ']' 00:22:28.073 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 1728959 00:22:28.073 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 1728959 ']' 00:22:28.073 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 1728959 00:22:28.073 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:22:28.073 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:28.073 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1728959 00:22:28.331 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:28.331 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:28.331 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1728959' 00:22:28.331 killing process with pid 1728959 00:22:28.331 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 1728959 00:22:28.331 13:53:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 1728959 00:22:28.589 13:53:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:28.589 13:53:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:22:28.589 00:22:28.589 real 0m36.154s 00:22:28.589 user 2m0.612s 00:22:28.589 sys 0m6.096s 00:22:28.589 13:53:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:28.589 13:53:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:28.589 ************************************ 00:22:28.589 END TEST nvmf_rpc 00:22:28.589 ************************************ 00:22:28.589 13:53:28 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=rdma 00:22:28.589 13:53:28 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:28.589 13:53:28 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:28.589 13:53:28 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:28.590 ************************************ 00:22:28.590 START TEST nvmf_invalid 00:22:28.590 ************************************ 00:22:28.590 13:53:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=rdma 00:22:28.590 * Looking for test storage... 
00:22:28.590 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:22:28.590 13:53:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:28.590 13:53:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lcov --version 00:22:28.590 13:53:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:28.590 13:53:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:28.590 13:53:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:28.590 13:53:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:28.590 13:53:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:28.590 13:53:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:22:28.590 13:53:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:22:28.590 13:53:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:22:28.590 13:53:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:22:28.590 13:53:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:22:28.590 13:53:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:22:28.590 13:53:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:22:28.590 13:53:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:28.590 13:53:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:22:28.590 13:53:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:22:28.590 13:53:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:28.590 13:53:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:28.590 13:53:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:22:28.590 13:53:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:22:28.590 13:53:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:28.590 13:53:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:22:28.849 13:53:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:22:28.849 13:53:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:22:28.849 13:53:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:22:28.849 13:53:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:28.849 13:53:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:22:28.849 13:53:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:22:28.849 13:53:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:28.849 13:53:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:28.849 13:53:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:22:28.849 13:53:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:28.849 13:53:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:28.849 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:28.849 --rc genhtml_branch_coverage=1 00:22:28.849 --rc genhtml_function_coverage=1 00:22:28.849 --rc genhtml_legend=1 00:22:28.849 --rc geninfo_all_blocks=1 00:22:28.849 --rc geninfo_unexecuted_blocks=1 00:22:28.849 00:22:28.849 ' 00:22:28.849 13:53:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:28.849 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:28.849 --rc genhtml_branch_coverage=1 00:22:28.849 --rc genhtml_function_coverage=1 00:22:28.849 --rc genhtml_legend=1 00:22:28.849 --rc geninfo_all_blocks=1 00:22:28.849 --rc geninfo_unexecuted_blocks=1 00:22:28.849 00:22:28.849 ' 00:22:28.849 13:53:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:28.849 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:28.849 --rc genhtml_branch_coverage=1 00:22:28.849 --rc genhtml_function_coverage=1 00:22:28.849 --rc genhtml_legend=1 00:22:28.849 --rc geninfo_all_blocks=1 00:22:28.849 --rc geninfo_unexecuted_blocks=1 00:22:28.849 00:22:28.849 ' 00:22:28.849 13:53:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:28.849 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:28.849 --rc genhtml_branch_coverage=1 00:22:28.849 --rc genhtml_function_coverage=1 00:22:28.849 --rc genhtml_legend=1 00:22:28.849 --rc geninfo_all_blocks=1 00:22:28.849 --rc geninfo_unexecuted_blocks=1 00:22:28.849 00:22:28.849 ' 00:22:28.849 13:53:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:22:28.849 13:53:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:22:28.849 
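Before the nvmf_invalid trace proper gets going, invalid.sh sources test/nvmf/common.sh, whose defaults are what produced the addresses and serial seen throughout this log. Paraphrased from the common.sh assignments traced on the following lines (not the verbatim source):

    NVMF_PORT=4420 NVMF_SECOND_PORT=4421 NVMF_THIRD_PORT=4422
    NVMF_IP_PREFIX=192.168.100 NVMF_IP_LEAST_ADDR=8       # hence target address 192.168.100.8
    NVMF_SERIAL=SPDKISFASTANDAWESOME                      # the serial the settle loops grep for
    NVME_HOSTNQN=$(nvme gen-hostnqn)                      # uuid-form host NQN
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")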
13:53:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:28.849 13:53:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:28.850 13:53:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:28.850 13:53:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:28.850 13:53:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:28.850 13:53:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:28.850 13:53:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:28.850 13:53:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:28.850 13:53:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:28.850 13:53:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:28.850 13:53:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:22:28.850 13:53:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:22:28.850 13:53:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:28.850 13:53:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:28.850 13:53:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:28.850 13:53:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:28.850 13:53:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:22:28.850 13:53:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:22:28.850 13:53:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:28.850 13:53:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:28.850 13:53:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:28.850 13:53:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:28.850 13:53:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:28.850 13:53:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:28.850 13:53:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:22:28.850 13:53:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:28.850 13:53:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:22:28.850 13:53:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:28.850 13:53:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:28.850 13:53:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:28.850 13:53:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:28.850 13:53:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:28.850 13:53:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:28.850 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:28.850 13:53:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:28.850 13:53:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:28.850 13:53:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:28.850 13:53:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:22:28.850 13:53:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- 
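One genuine script wart surfaces in the trace above: common.sh line 33 evaluates `[ '' -eq 1 ]`, and `[` rejects the empty string with "integer expression expected", so the branch is silently skipped. The usual guard is to default the variable before an arithmetic test; a hedged sketch (the variable name below is a placeholder, since the real one is not visible in this excerpt):

    flag=${flag:-0}                # empty or unset becomes 0
    if [ "$flag" -eq 1 ]; then     # now always a well-formed integer test
        echo "feature enabled"
    fi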
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:22:28.850 13:53:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:22:28.850 13:53:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:22:28.850 13:53:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:22:28.850 13:53:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:22:28.850 13:53:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:22:28.850 13:53:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:28.850 13:53:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:28.850 13:53:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:28.850 13:53:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:28.850 13:53:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:28.850 13:53:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:28.850 13:53:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:28.850 13:53:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:28.850 13:53:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:28.850 13:53:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:22:28.850 13:53:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:22:35.415 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:35.415 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:22:35.415 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:35.415 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:35.415 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:35.415 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:35.415 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:35.415 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:22:35.415 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:35.415 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:22:35.415 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:22:35.415 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:22:35.415 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:22:35.415 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:22:35.415 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:22:35.415 13:53:34 
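Before touching hardware, nvmftestinit arms a cleanup handler so an interrupted run still tears the target down, and wraps the namespace-removal helper with a redirect that (per the helper's name, xtrace_disable_per_cmd) discards its trace output. The trap idiom on its own, with a stand-in body:

    nvmftestfini() { echo "tearing down target and namespaces"; }  # stand-in
    trap nvmftestfini SIGINT SIGTERM EXIT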
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:35.415 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:35.415 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:35.415 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:35.415 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:35.415 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:35.415 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:35.415 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:35.415 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:35.415 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:35.415 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:35.415 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:35.415 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:35.415 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:22:35.415 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:22:35.415 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:22:35.415 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:22:35.415 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:22:35.415 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:35.415 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:35.415 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:22:35.415 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:22:35.415 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:22:35.415 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:22:35.415 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:35.415 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:22:35.415 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:22:35.415 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:22:35.415 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:35.415 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- 
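The odd-looking `[[ 0x1015 == \0\x\1\0\1\7 ]]` lines are just xtrace rendering: inside `[[ ]]` the right-hand side is a glob pattern, so bash prints each character backslash-escaped to show it is being matched literally. The loop is classifying each detected Mellanox device ID to pick device-specific handling; 0x1015 matches neither 0x1017 nor 0x1019, so only the generic RDMA branch fires and `-i 15` is appended to the nvme connect command. The same idiom in isolation:

    device=0x1015                      # ID reported for both ports in this log
    if [[ $device == 0x1017 ]]; then   # literal [[ ]] pattern match
        echo "device-specific handling"
    else
        echo "generic mlx5 handling"
    fi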
nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:22:35.415 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:22:35.415 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:22:35.415 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:22:35.415 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:35.415 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:22:35.415 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:22:35.415 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:22:35.415 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:35.415 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:22:35.415 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:35.415 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:35.415 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:22:35.415 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:35.415 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:35.415 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:22:35.415 Found net devices under 0000:18:00.0: mlx_0_0 00:22:35.415 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:35.415 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:35.415 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:35.415 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:22:35.415 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:35.415 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:35.415 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:22:35.415 Found net devices under 0000:18:00.1: mlx_0_1 00:22:35.415 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:35.415 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:35.415 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:22:35.415 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:35.415 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:22:35.415 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:22:35.415 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@448 -- # rdma_device_init 00:22:35.415 13:53:34 
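Each whitelisted PCI address is then resolved to its kernel network interface purely through sysfs: the interface name is whatever entry exists under /sys/bus/pci/devices/<addr>/net/, and a parameter expansion strips the path. A minimal sketch of that mapping, using the first address from this run:

    pci=0000:18:00.0
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/mlx_0_0
    pci_net_devs=("${pci_net_devs[@]##*/}")            # keep only the names
    echo "Found net devices under $pci: ${pci_net_devs[*]}"

If the glob matches nothing, the unexpanded pattern is left in the array, which is why the script checks the element count (common.sh@422) before trusting it.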
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:22:35.415 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@62 -- # uname 00:22:35.415 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:22:35.415 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@66 -- # modprobe ib_cm 00:22:35.415 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@67 -- # modprobe ib_core 00:22:35.415 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@68 -- # modprobe ib_umad 00:22:35.415 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:22:35.415 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@70 -- # modprobe iw_cm 00:22:35.415 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:22:35.415 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:22:35.415 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@530 -- # allocate_nic_ips 00:22:35.415 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:22:35.415 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@77 -- # get_rdma_if_list 00:22:35.415 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:35.415 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:22:35.415 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:22:35.415 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:35.415 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:22:35.415 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:22:35.415 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:35.415 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:35.415 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@108 -- # echo mlx_0_0 00:22:35.415 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@109 -- # continue 2 00:22:35.415 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:22:35.415 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:35.415 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:35.415 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:35.415 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:35.415 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@108 -- # echo mlx_0_1 00:22:35.415 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@109 -- # continue 2 00:22:35.415 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:22:35.415 13:53:34 
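rdma_device_init simply loads the whole soft-RDMA stack up front before enumerating interfaces; the equivalent one-liner (requires root):

    # Kernel modules the RDMA test path expects to be present.
    for m in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
        modprobe "$m"
    done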
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:22:35.415 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:22:35.415 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:22:35.415 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:22:35.415 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:22:35.415 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:22:35.415 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:22:35.415 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:22:35.415 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:35.415 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:22:35.415 altname enp24s0f0np0 00:22:35.415 altname ens785f0np0 00:22:35.415 inet 192.168.100.8/24 scope global mlx_0_0 00:22:35.415 valid_lft forever preferred_lft forever 00:22:35.415 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:22:35.415 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:22:35.415 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:22:35.415 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:22:35.415 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:22:35.415 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:22:35.415 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:22:35.415 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:22:35.415 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:22:35.415 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:35.415 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:22:35.415 altname enp24s0f1np1 00:22:35.415 altname ens785f1np1 00:22:35.415 inet 192.168.100.9/24 scope global mlx_0_1 00:22:35.415 valid_lft forever preferred_lft forever 00:22:35.415 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:22:35.415 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:35.415 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:22:35.415 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:22:35.416 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:22:35.416 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@90 -- # get_rdma_if_list 00:22:35.416 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:35.416 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:22:35.416 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:22:35.416 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid 
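The address harvesting above is one pipeline per interface: `ip -o -4 addr show <if>` prints a single-line record whose fourth field is `ADDR/PREFIX`, awk selects the field, and cut drops the prefix length. As a standalone helper:

    # Print the IPv4 address of an interface, as nvmf/common.sh@116-117 does.
    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }

    get_ip_address mlx_0_0   # prints 192.168.100.8 on this test rig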
-- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:35.416 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:22:35.416 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:22:35.416 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:35.416 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:35.416 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@108 -- # echo mlx_0_0 00:22:35.416 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@109 -- # continue 2 00:22:35.416 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:22:35.416 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:35.416 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:35.416 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:35.416 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:35.416 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@108 -- # echo mlx_0_1 00:22:35.416 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@109 -- # continue 2 00:22:35.416 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:22:35.416 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:22:35.416 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:22:35.416 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:22:35.416 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:22:35.416 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:22:35.416 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:22:35.416 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:22:35.416 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:22:35.416 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:22:35.416 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:22:35.416 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:22:35.416 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:22:35.416 192.168.100.9' 00:22:35.416 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:22:35.416 192.168.100.9' 00:22:35.416 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@485 -- # head -n 1 00:22:35.416 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:22:35.416 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@486 -- 
# echo '192.168.100.8 00:22:35.416 192.168.100.9' 00:22:35.416 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@486 -- # tail -n +2 00:22:35.416 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@486 -- # head -n 1 00:22:35.416 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:22:35.416 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:22:35.416 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:22:35.416 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:22:35.416 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:22:35.416 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:22:35.416 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:22:35.416 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:35.416 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:35.416 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:22:35.416 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=1737898 00:22:35.416 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 1737898 00:22:35.416 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:35.416 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 1737898 ']' 00:22:35.416 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:35.416 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:35.416 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:35.416 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:35.416 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:35.416 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:22:35.416 [2024-12-05 13:53:34.588105] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 00:22:35.416 [2024-12-05 13:53:34.588157] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:35.416 [2024-12-05 13:53:34.655359] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:35.416 [2024-12-05 13:53:34.678466] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:35.416 [2024-12-05 13:53:34.678505] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
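With both port addresses collected into one newline-separated list, the first and second target IPs fall out of a head/tail split, exactly as traced at common.sh@485-486:

    RDMA_IP_LIST=$'192.168.100.8\n192.168.100.9'           # values from this run
    NVMF_FIRST_TARGET_IP=$(head -n 1 <<< "$RDMA_IP_LIST")
    NVMF_SECOND_TARGET_IP=$(tail -n +2 <<< "$RDMA_IP_LIST" | head -n 1)
    echo "$NVMF_FIRST_TARGET_IP / $NVMF_SECOND_TARGET_IP"  # .8 / .9

The run then launches the target (`nvmf_tgt -i 0 -e 0xFFFF -m 0xF`) and waits on its RPC socket before the negative tests begin, as the startup notices below show.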
00:22:35.416 [2024-12-05 13:53:34.678512] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:35.416 [2024-12-05 13:53:34.678517] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:35.416 [2024-12-05 13:53:34.678521] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:35.416 [2024-12-05 13:53:34.680114] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:35.416 [2024-12-05 13:53:34.680223] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:35.416 [2024-12-05 13:53:34.680351] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:35.416 [2024-12-05 13:53:34.680353] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:35.416 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:35.416 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:22:35.416 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:35.416 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:35.416 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:22:35.416 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:35.416 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:22:35.416 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode12956 00:22:35.416 [2024-12-05 13:53:34.968561] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:22:35.416 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:22:35.416 { 00:22:35.416 "nqn": "nqn.2016-06.io.spdk:cnode12956", 00:22:35.416 "tgt_name": "foobar", 00:22:35.416 "method": "nvmf_create_subsystem", 00:22:35.416 "req_id": 1 00:22:35.416 } 00:22:35.416 Got JSON-RPC error response 00:22:35.416 response: 00:22:35.416 { 00:22:35.416 "code": -32603, 00:22:35.416 "message": "Unable to find target foobar" 00:22:35.416 }' 00:22:35.416 13:53:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:22:35.416 { 00:22:35.416 "nqn": "nqn.2016-06.io.spdk:cnode12956", 00:22:35.416 "tgt_name": "foobar", 00:22:35.416 "method": "nvmf_create_subsystem", 00:22:35.416 "req_id": 1 00:22:35.416 } 00:22:35.416 Got JSON-RPC error response 00:22:35.416 response: 00:22:35.416 { 00:22:35.416 "code": -32603, 00:22:35.416 "message": "Unable to find target foobar" 00:22:35.416 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:22:35.416 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:22:35.416 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode12383 00:22:35.416 [2024-12-05 13:53:35.161177] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem 
nqn.2016-06.io.spdk:cnode12383: invalid serial number 'SPDKISFASTANDAWESOME' 00:22:35.416 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:22:35.416 { 00:22:35.416 "nqn": "nqn.2016-06.io.spdk:cnode12383", 00:22:35.416 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:22:35.416 "method": "nvmf_create_subsystem", 00:22:35.416 "req_id": 1 00:22:35.416 } 00:22:35.416 Got JSON-RPC error response 00:22:35.416 response: 00:22:35.416 { 00:22:35.416 "code": -32602, 00:22:35.416 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:22:35.416 }' 00:22:35.416 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:22:35.416 { 00:22:35.416 "nqn": "nqn.2016-06.io.spdk:cnode12383", 00:22:35.416 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:22:35.416 "method": "nvmf_create_subsystem", 00:22:35.416 "req_id": 1 00:22:35.416 } 00:22:35.416 Got JSON-RPC error response 00:22:35.416 response: 00:22:35.416 { 00:22:35.416 "code": -32602, 00:22:35.416 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:22:35.416 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:22:35.416 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:22:35.416 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode24003 00:22:35.729 [2024-12-05 13:53:35.349880] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24003: invalid model number 'SPDK_Controller' 00:22:35.729 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:22:35.729 { 00:22:35.729 "nqn": "nqn.2016-06.io.spdk:cnode24003", 00:22:35.729 "model_number": "SPDK_Controller\u001f", 00:22:35.729 "method": "nvmf_create_subsystem", 00:22:35.729 "req_id": 1 00:22:35.729 } 00:22:35.729 Got JSON-RPC error response 00:22:35.729 response: 00:22:35.729 { 00:22:35.729 "code": -32602, 00:22:35.729 "message": "Invalid MN SPDK_Controller\u001f" 00:22:35.729 }' 00:22:35.729 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:22:35.729 { 00:22:35.729 "nqn": "nqn.2016-06.io.spdk:cnode24003", 00:22:35.729 "model_number": "SPDK_Controller\u001f", 00:22:35.729 "method": "nvmf_create_subsystem", 00:22:35.729 "req_id": 1 00:22:35.729 } 00:22:35.729 Got JSON-RPC error response 00:22:35.729 response: 00:22:35.729 { 00:22:35.729 "code": -32602, 00:22:35.729 "message": "Invalid MN SPDK_Controller\u001f" 00:22:35.729 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:22:35.729 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:22:35.729 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:22:35.729 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:22:35.729 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid 
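Every negative test in invalid.sh follows the pattern visible in these exchanges: run an rpc.py call that must fail, capture the JSON-RPC response text, and glob-match the expected error. The serial-number and model-number variants additionally splice an invisible unit separator (0x1f) onto an otherwise valid string with bash's `$'...'` quoting. In isolation (rpc.py path shortened; the log uses the full workspace path):

    # Nonexistent target name -> code -32603, "Unable to find target foobar"
    out=$(scripts/rpc.py nvmf_create_subsystem -t foobar \
            nqn.2016-06.io.spdk:cnode12956 2>&1) || true
    [[ $out == *"Unable to find target"* ]] && echo "target name rejected"

    # Serial number with a trailing 0x1f byte -> code -32602, "Invalid SN"
    out=$(scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' \
            nqn.2016-06.io.spdk:cnode12383 2>&1) || true
    [[ $out == *"Invalid SN"* ]] && echo "control byte rejected"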
-- target/invalid.sh@21 -- # local chars 00:22:35.729 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:22:35.729 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:22:35.729 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:22:35.729 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:22:35.729 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:22:35.729 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:22:35.729 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:22:35.729 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:22:35.729 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:22:35.729 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:22:35.729 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:22:35.729 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:22:35.729 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:22:35.729 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:22:35.729 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:22:35.729 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:22:35.729 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:22:35.729 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:22:35.729 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:22:35.729 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:22:35.729 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:22:35.729 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:22:35.729 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:22:35.729 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:22:35.729 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:22:35.729 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:22:35.729 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:22:35.729 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:22:35.729 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:22:35.729 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:22:35.729 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:22:35.729 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:22:35.729 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:22:35.729 13:53:35 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:22:35.729 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:22:35.729 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:22:35.729 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:22:35.729 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:22:35.729 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:22:35.729 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:22:35.729 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:22:35.729 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:22:35.729 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:22:35.729 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:22:35.729 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:22:35.729 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:22:35.729 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:22:35.729 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:22:35.729 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:22:35.729 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:22:35.729 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:22:35.729 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:22:35.729 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:22:35.729 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:22:35.729 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:22:35.729 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:22:35.729 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:22:35.729 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:22:35.729 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:22:35.729 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:22:35.729 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:22:35.729 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:22:35.729 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:22:35.729 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:22:35.729 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:22:35.729 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:22:35.729 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( 
ll++ )) 00:22:35.729 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:22:35.729 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:22:35.729 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:22:35.729 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:22:35.729 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:22:35.729 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:22:35.729 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:22:35.729 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:22:35.729 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:22:35.729 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:22:35.729 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:22:35.729 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:22:35.729 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:22:35.729 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:22:35.729 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:22:35.730 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:22:35.730 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:22:35.730 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:22:35.730 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:22:35.730 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:22:35.730 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:22:35.730 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:22:35.730 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:22:35.730 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:22:35.730 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:22:35.730 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:22:35.730 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:22:35.730 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:22:35.730 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:22:35.730 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:22:35.730 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:22:35.730 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:22:35.730 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:22:35.730 13:53:35 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:22:35.730 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:22:35.730 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:22:35.730 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:22:35.730 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:22:35.730 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:22:35.730 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:22:35.730 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:22:35.730 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ p == \- ]] 00:22:35.730 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'p^(>\%]E\YV=0h~_>({m]' 00:22:35.730 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'p^(>\%]E\YV=0h~_>({m]' nqn.2016-06.io.spdk:cnode14906 00:22:35.989 [2024-12-05 13:53:35.682945] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14906: invalid serial number 'p^(>\%]E\YV=0h~_>({m]' 00:22:35.989 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:22:35.989 { 00:22:35.989 "nqn": "nqn.2016-06.io.spdk:cnode14906", 00:22:35.989 "serial_number": "p^(>\\%]E\\YV=0h~_>({m]", 00:22:35.989 "method": "nvmf_create_subsystem", 00:22:35.989 "req_id": 1 00:22:35.989 } 00:22:35.989 Got JSON-RPC error response 00:22:35.989 response: 00:22:35.989 { 00:22:35.989 "code": -32602, 00:22:35.989 "message": "Invalid SN p^(>\\%]E\\YV=0h~_>({m]" 00:22:35.989 }' 00:22:35.989 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:22:35.989 { 00:22:35.989 "nqn": "nqn.2016-06.io.spdk:cnode14906", 00:22:35.989 "serial_number": "p^(>\\%]E\\YV=0h~_>({m]", 00:22:35.989 "method": "nvmf_create_subsystem", 00:22:35.989 "req_id": 1 00:22:35.989 } 00:22:35.989 Got JSON-RPC error response 00:22:35.989 response: 00:22:35.989 { 00:22:35.989 "code": -32602, 00:22:35.989 "message": "Invalid SN p^(>\\%]E\\YV=0h~_>({m]" 00:22:35.989 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:22:35.989 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:22:35.989 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:22:35.989 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:22:35.989 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:22:35.989 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:22:35.989 
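gen_random_s, traced at length above, builds each candidate string one byte at a time: pick an entry from a table of ASCII codes 32-127, convert it to hex with `printf %x`, render it with `echo -e '\xNN'`, and append, quoting shell metacharacters like `\` and `'` as it goes. Because invalid.sh sets `RANDOM=0` up front, the "random" strings are reproducible across runs, which is what makes `p^(>\%]E\YV=0h~_>({m]` a stable test vector. A condensed sketch of the same generator (simplified to the printable range and skipping the per-character quoting):

    gen_random_s() {
        local length=$1 ll char string=
        for (( ll = 0; ll < length; ll++ )); do
            # Codes 32..126: the printable subset of the table in the trace.
            printf -v char "\\x$(printf %x $(( 32 + RANDOM % 95 )))"
            string+=$char
        done
        echo "$string"
    }

    RANDOM=0          # reseed as target/invalid.sh@16 does, for reproducibility
    gen_random_s 21   # a 21-character serial-number candidate like the one above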
13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:22:35.989 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:22:35.989 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:22:35.989 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:22:35.989 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:22:35.989 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:22:35.989 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:22:35.989 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:22:35.989 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:22:35.989 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:22:35.989 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:22:35.989 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:22:35.989 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:22:35.989 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:22:35.989 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:22:35.989 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:22:35.989 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:22:35.989 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:22:35.989 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:22:35.989 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:22:35.989 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:22:35.989 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:22:35.989 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:22:35.989 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:22:35.989 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:22:35.989 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:22:35.989 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:22:35.989 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:22:35.989 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:22:35.989 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:22:35.989 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:22:35.989 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:22:35.989 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:22:35.989 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@25 -- # echo -e '\x58' 00:22:35.989 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:22:35.989 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:22:35.989 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:22:35.989 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:22:35.989 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:22:35.989 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:22:35.989 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:22:35.989 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:22:35.989 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:22:35.989 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:22:35.989 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:22:35.989 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:22:35.989 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:22:35.989 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:22:35.989 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:22:35.989 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:22:35.989 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:22:35.989 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:22:35.989 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:22:35.989 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:22:35.989 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:22:35.989 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:22:35.989 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:22:35.989 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:22:35.989 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:22:35.989 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:22:35.989 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:22:35.989 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:22:35.989 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:22:35.989 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:22:35.989 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:22:35.989 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:22:35.989 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:22:35.989 13:53:35 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:22:35.989 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:22:35.989 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:22:35.989 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:22:35.989 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:22:35.989 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:22:35.989 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:22:35.989 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:22:35.989 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:22:35.989 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:22:35.989 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:22:35.989 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:22:35.989 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:22:35.989 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:22:35.989 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:22:35.989 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:22:35.989 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:22:35.989 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:22:35.989 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:22:35.990 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:22:35.990 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:22:35.990 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:22:35.990 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:22:35.990 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:22:35.990 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:22:35.990 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:22:35.990 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:22:35.990 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:22:35.990 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:22:35.990 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:22:36.248 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:22:36.248 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:22:36.248 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:22:36.248 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( 
ll++ )) 00:22:36.248 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:22:36.248 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:22:36.248 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:22:36.248 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:22:36.248 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:22:36.248 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:22:36.248 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:22:36.248 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:22:36.248 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:22:36.248 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:22:36.248 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:22:36.248 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:22:36.248 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:22:36.248 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:22:36.248 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:22:36.248 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:22:36.248 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:22:36.248 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:22:36.248 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:22:36.248 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:22:36.248 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:22:36.248 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:22:36.248 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:22:36.248 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:22:36.248 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:22:36.248 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:22:36.248 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:22:36.248 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:22:36.248 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:22:36.248 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:22:36.248 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:22:36.248 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:22:36.248 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:22:36.248 13:53:35 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 00:22:36.248 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:22:36.248 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:22:36.248 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:22:36.248 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:22:36.248 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:22:36.248 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:22:36.248 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:22:36.248 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:22:36.248 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:22:36.249 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:22:36.249 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:22:36.249 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:22:36.249 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:22:36.249 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:22:36.249 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:22:36.249 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:22:36.249 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:22:36.249 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:22:36.249 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:22:36.249 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:22:36.249 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:22:36.249 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:22:36.249 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:22:36.249 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:22:36.249 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:22:36.249 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:22:36.249 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:22:36.249 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:22:36.249 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:22:36.249 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
00:22:36.249 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:22:36.249 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:22:36.249 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:22:36.249 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:22:36.249 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:22:36.249 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:22:36.249 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:22:36.249 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:22:36.249 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:22:36.249 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:22:36.249 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:22:36.249 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:22:36.249 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:22:36.249 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:22:36.249 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:22:36.249 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:22:36.249 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:22:36.249 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:22:36.249 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:22:36.249 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:22:36.249 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:22:36.249 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:22:36.249 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:22:36.249 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:22:36.249 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:22:36.249 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:22:36.249 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:22:36.249 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:22:36.249 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:22:36.249 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:22:36.249 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:22:36.249 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:22:36.249 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:22:36.249 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@25 -- # echo -e '\x2e' 00:22:36.249 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 00:22:36.249 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:22:36.249 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:22:36.249 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:22:36.249 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:22:36.249 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:22:36.249 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:22:36.249 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:22:36.249 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ 0 == \- ]] 00:22:36.249 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '0ynKcfXJeNw<-mr~-'\''7C_9yz+x?f@KD:?;0:='\''/.@' 00:22:36.249 13:53:35 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '0ynKcfXJeNw<-mr~-'\''7C_9yz+x?f@KD:?;0:='\''/.@' nqn.2016-06.io.spdk:cnode7880 00:22:36.507 [2024-12-05 13:53:36.136436] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7880: invalid model number '0ynKcfXJeNw<-mr~-'7C_9yz+x?f@KD:?;0:='/.@' 00:22:36.507 13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:22:36.507 { 00:22:36.507 "nqn": "nqn.2016-06.io.spdk:cnode7880", 00:22:36.507 "model_number": "0ynKcfXJeNw<-mr~-'\''7C_9yz+x?f@KD:?;0:='\''/.@", 00:22:36.507 "method": "nvmf_create_subsystem", 00:22:36.507 "req_id": 1 00:22:36.507 } 00:22:36.507 Got JSON-RPC error response 00:22:36.507 response: 00:22:36.507 { 00:22:36.507 "code": -32602, 00:22:36.507 "message": "Invalid MN 0ynKcfXJeNw<-mr~-'\''7C_9yz+x?f@KD:?;0:='\''/.@" 00:22:36.507 }' 00:22:36.507 13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:22:36.507 { 00:22:36.507 "nqn": "nqn.2016-06.io.spdk:cnode7880", 00:22:36.507 "model_number": "0ynKcfXJeNw<-mr~-'7C_9yz+x?f@KD:?;0:='/.@", 00:22:36.507 "method": "nvmf_create_subsystem", 00:22:36.507 "req_id": 1 00:22:36.507 } 00:22:36.507 Got JSON-RPC error response 00:22:36.507 response: 00:22:36.507 { 00:22:36.507 "code": -32602, 00:22:36.507 "message": "Invalid MN 0ynKcfXJeNw<-mr~-'7C_9yz+x?f@KD:?;0:='/.@" 00:22:36.507 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:22:36.507 13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype rdma 00:22:36.507 [2024-12-05 13:53:36.340852] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1fbe950/0x1fc2e40) succeed. 00:22:36.507 [2024-12-05 13:53:36.349211] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1fbffe0/0x20044e0) succeed. 
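
The xtrace above is target/invalid.sh assembling a 41-character random model number one byte at a time: printf %x converts a decimal code point to hex, echo -e '\xNN' expands the hex escape into the literal character, and string+= accumulates the result. A minimal standalone sketch of that pattern follows; the helper name and the code-point range are illustrative assumptions, not copied from invalid.sh:

    # Hypothetical helper approximating the traced loop; invalid.sh's own
    # character-set handling may differ.
    gen_random_string() {
      local length=$1 string='' ll code
      for (( ll = 0; ll < length; ll++ )); do
        code=$(( RANDOM % 94 + 33 ))   # assumed range: printable ASCII 0x21-0x7e
        # the same three traced steps: printf %x <dec>, echo -e '\xNN', string+=<char>
        string+=$(echo -e "\x$(printf %x "$code")")
      done
      printf '%s\n' "$string"
    }

As the request/response pair above shows, the generated string is then passed to scripts/rpc.py nvmf_create_subsystem -d as the model number, and the test asserts that the JSON-RPC error (-32602) matches *Invalid MN*.
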
00:22:36.766 13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:22:37.025 13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ rdma == \T\C\P ]] 00:22:37.025 13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '192.168.100.8 00:22:37.025 192.168.100.9' 00:22:37.025 13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:22:37.025 13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP=192.168.100.8 00:22:37.025 13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t rdma -a 192.168.100.8 -s 4421 00:22:37.025 [2024-12-05 13:53:36.834767] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:22:37.025 13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:22:37.025 { 00:22:37.025 "nqn": "nqn.2016-06.io.spdk:cnode", 00:22:37.025 "listen_address": { 00:22:37.025 "trtype": "rdma", 00:22:37.025 "traddr": "192.168.100.8", 00:22:37.025 "trsvcid": "4421" 00:22:37.025 }, 00:22:37.025 "method": "nvmf_subsystem_remove_listener", 00:22:37.025 "req_id": 1 00:22:37.025 } 00:22:37.025 Got JSON-RPC error response 00:22:37.025 response: 00:22:37.025 { 00:22:37.025 "code": -32602, 00:22:37.025 "message": "Invalid parameters" 00:22:37.025 }' 00:22:37.025 13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:22:37.025 { 00:22:37.025 "nqn": "nqn.2016-06.io.spdk:cnode", 00:22:37.025 "listen_address": { 00:22:37.025 "trtype": "rdma", 00:22:37.025 "traddr": "192.168.100.8", 00:22:37.025 "trsvcid": "4421" 00:22:37.025 }, 00:22:37.025 "method": "nvmf_subsystem_remove_listener", 00:22:37.025 "req_id": 1 00:22:37.025 } 00:22:37.025 Got JSON-RPC error response 00:22:37.025 response: 00:22:37.025 { 00:22:37.025 "code": -32602, 00:22:37.025 "message": "Invalid parameters" 00:22:37.025 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:22:37.025 13:53:36 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode15260 -i 0 00:22:37.284 [2024-12-05 13:53:37.019370] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15260: invalid cntlid range [0-65519] 00:22:37.284 13:53:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:22:37.284 { 00:22:37.284 "nqn": "nqn.2016-06.io.spdk:cnode15260", 00:22:37.284 "min_cntlid": 0, 00:22:37.284 "method": "nvmf_create_subsystem", 00:22:37.284 "req_id": 1 00:22:37.284 } 00:22:37.284 Got JSON-RPC error response 00:22:37.284 response: 00:22:37.284 { 00:22:37.284 "code": -32602, 00:22:37.284 "message": "Invalid cntlid range [0-65519]" 00:22:37.284 }' 00:22:37.284 13:53:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:22:37.284 { 00:22:37.284 "nqn": "nqn.2016-06.io.spdk:cnode15260", 00:22:37.284 "min_cntlid": 0, 00:22:37.284 "method": "nvmf_create_subsystem", 00:22:37.284 "req_id": 1 00:22:37.284 } 00:22:37.284 Got JSON-RPC error response 00:22:37.284 response: 00:22:37.284 { 00:22:37.284 "code": -32602, 00:22:37.284 "message": 
"Invalid cntlid range [0-65519]" 00:22:37.284 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:22:37.284 13:53:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode18213 -i 65520 00:22:37.542 [2024-12-05 13:53:37.216045] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode18213: invalid cntlid range [65520-65519] 00:22:37.542 13:53:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:22:37.542 { 00:22:37.542 "nqn": "nqn.2016-06.io.spdk:cnode18213", 00:22:37.542 "min_cntlid": 65520, 00:22:37.542 "method": "nvmf_create_subsystem", 00:22:37.542 "req_id": 1 00:22:37.542 } 00:22:37.542 Got JSON-RPC error response 00:22:37.542 response: 00:22:37.542 { 00:22:37.542 "code": -32602, 00:22:37.542 "message": "Invalid cntlid range [65520-65519]" 00:22:37.542 }' 00:22:37.542 13:53:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:22:37.542 { 00:22:37.542 "nqn": "nqn.2016-06.io.spdk:cnode18213", 00:22:37.542 "min_cntlid": 65520, 00:22:37.542 "method": "nvmf_create_subsystem", 00:22:37.542 "req_id": 1 00:22:37.542 } 00:22:37.542 Got JSON-RPC error response 00:22:37.542 response: 00:22:37.542 { 00:22:37.542 "code": -32602, 00:22:37.542 "message": "Invalid cntlid range [65520-65519]" 00:22:37.542 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:22:37.542 13:53:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode23201 -I 0 00:22:37.800 [2024-12-05 13:53:37.412748] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23201: invalid cntlid range [1-0] 00:22:37.800 13:53:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:22:37.800 { 00:22:37.800 "nqn": "nqn.2016-06.io.spdk:cnode23201", 00:22:37.800 "max_cntlid": 0, 00:22:37.800 "method": "nvmf_create_subsystem", 00:22:37.800 "req_id": 1 00:22:37.800 } 00:22:37.800 Got JSON-RPC error response 00:22:37.800 response: 00:22:37.800 { 00:22:37.800 "code": -32602, 00:22:37.800 "message": "Invalid cntlid range [1-0]" 00:22:37.801 }' 00:22:37.801 13:53:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:22:37.801 { 00:22:37.801 "nqn": "nqn.2016-06.io.spdk:cnode23201", 00:22:37.801 "max_cntlid": 0, 00:22:37.801 "method": "nvmf_create_subsystem", 00:22:37.801 "req_id": 1 00:22:37.801 } 00:22:37.801 Got JSON-RPC error response 00:22:37.801 response: 00:22:37.801 { 00:22:37.801 "code": -32602, 00:22:37.801 "message": "Invalid cntlid range [1-0]" 00:22:37.801 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:22:37.801 13:53:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20407 -I 65520 00:22:37.801 [2024-12-05 13:53:37.601440] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20407: invalid cntlid range [1-65520] 00:22:37.801 13:53:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:22:37.801 { 00:22:37.801 "nqn": "nqn.2016-06.io.spdk:cnode20407", 00:22:37.801 "max_cntlid": 65520, 00:22:37.801 "method": "nvmf_create_subsystem", 00:22:37.801 "req_id": 1 00:22:37.801 } 00:22:37.801 Got 
JSON-RPC error response 00:22:37.801 response: 00:22:37.801 { 00:22:37.801 "code": -32602, 00:22:37.801 "message": "Invalid cntlid range [1-65520]" 00:22:37.801 }' 00:22:37.801 13:53:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:22:37.801 { 00:22:37.801 "nqn": "nqn.2016-06.io.spdk:cnode20407", 00:22:37.801 "max_cntlid": 65520, 00:22:37.801 "method": "nvmf_create_subsystem", 00:22:37.801 "req_id": 1 00:22:37.801 } 00:22:37.801 Got JSON-RPC error response 00:22:37.801 response: 00:22:37.801 { 00:22:37.801 "code": -32602, 00:22:37.801 "message": "Invalid cntlid range [1-65520]" 00:22:37.801 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:22:37.801 13:53:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode16272 -i 6 -I 5 00:22:38.059 [2024-12-05 13:53:37.790164] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16272: invalid cntlid range [6-5] 00:22:38.059 13:53:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:22:38.059 { 00:22:38.059 "nqn": "nqn.2016-06.io.spdk:cnode16272", 00:22:38.059 "min_cntlid": 6, 00:22:38.059 "max_cntlid": 5, 00:22:38.059 "method": "nvmf_create_subsystem", 00:22:38.059 "req_id": 1 00:22:38.059 } 00:22:38.059 Got JSON-RPC error response 00:22:38.059 response: 00:22:38.059 { 00:22:38.059 "code": -32602, 00:22:38.059 "message": "Invalid cntlid range [6-5]" 00:22:38.059 }' 00:22:38.059 13:53:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:22:38.059 { 00:22:38.059 "nqn": "nqn.2016-06.io.spdk:cnode16272", 00:22:38.059 "min_cntlid": 6, 00:22:38.059 "max_cntlid": 5, 00:22:38.059 "method": "nvmf_create_subsystem", 00:22:38.059 "req_id": 1 00:22:38.059 } 00:22:38.059 Got JSON-RPC error response 00:22:38.059 response: 00:22:38.059 { 00:22:38.059 "code": -32602, 00:22:38.059 "message": "Invalid cntlid range [6-5]" 00:22:38.059 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:22:38.059 13:53:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:22:38.318 13:53:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:22:38.318 { 00:22:38.318 "name": "foobar", 00:22:38.318 "method": "nvmf_delete_target", 00:22:38.318 "req_id": 1 00:22:38.318 } 00:22:38.318 Got JSON-RPC error response 00:22:38.318 response: 00:22:38.318 { 00:22:38.318 "code": -32602, 00:22:38.318 "message": "The specified target doesn'\''t exist, cannot delete it." 00:22:38.318 }' 00:22:38.318 13:53:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:22:38.318 { 00:22:38.318 "name": "foobar", 00:22:38.318 "method": "nvmf_delete_target", 00:22:38.318 "req_id": 1 00:22:38.318 } 00:22:38.318 Got JSON-RPC error response 00:22:38.318 response: 00:22:38.318 { 00:22:38.318 "code": -32602, 00:22:38.318 "message": "The specified target doesn't exist, cannot delete it." 
00:22:38.318 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:22:38.318 13:53:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:22:38.318 13:53:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:22:38.318 13:53:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:38.318 13:53:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:22:38.318 13:53:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:22:38.318 13:53:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:22:38.318 13:53:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:22:38.318 13:53:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:38.318 13:53:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:22:38.318 rmmod nvme_rdma 00:22:38.318 rmmod nvme_fabrics 00:22:38.318 13:53:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:38.318 13:53:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:22:38.318 13:53:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:22:38.318 13:53:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 1737898 ']' 00:22:38.318 13:53:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 1737898 00:22:38.318 13:53:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' -z 1737898 ']' 00:22:38.318 13:53:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # kill -0 1737898 00:22:38.318 13:53:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # uname 00:22:38.318 13:53:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:38.318 13:53:37 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1737898 00:22:38.318 13:53:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:38.318 13:53:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:38.318 13:53:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1737898' 00:22:38.318 killing process with pid 1737898 00:22:38.318 13:53:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@973 -- # kill 1737898 00:22:38.318 13:53:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@978 -- # wait 1737898 00:22:38.577 13:53:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:38.577 13:53:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:22:38.577 00:22:38.577 real 0m9.989s 00:22:38.577 user 0m18.244s 00:22:38.577 sys 0m5.500s 00:22:38.577 13:53:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:38.577 13:53:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:22:38.577 ************************************ 00:22:38.577 END 
TEST nvmf_invalid 00:22:38.577 ************************************ 00:22:38.577 13:53:38 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=rdma 00:22:38.577 13:53:38 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:38.577 13:53:38 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:38.577 13:53:38 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:38.577 ************************************ 00:22:38.577 START TEST nvmf_connect_stress 00:22:38.577 ************************************ 00:22:38.577 13:53:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=rdma 00:22:38.577 * Looking for test storage... 00:22:38.577 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:22:38.577 13:53:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:38.836 13:53:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:22:38.836 13:53:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:38.836 13:53:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:38.836 13:53:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:38.836 13:53:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:38.836 13:53:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:38.836 13:53:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:22:38.836 13:53:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:22:38.836 13:53:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:22:38.836 13:53:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:22:38.836 13:53:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:22:38.836 13:53:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:22:38.836 13:53:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:22:38.836 13:53:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:38.836 13:53:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:22:38.836 13:53:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:22:38.836 13:53:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:38.836 13:53:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:38.836 13:53:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:22:38.836 13:53:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:22:38.836 13:53:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:38.836 13:53:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:22:38.836 13:53:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:22:38.836 13:53:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:22:38.836 13:53:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:22:38.836 13:53:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:38.836 13:53:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:22:38.836 13:53:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:22:38.836 13:53:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:38.836 13:53:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:38.836 13:53:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:22:38.836 13:53:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:38.836 13:53:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:38.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:38.836 --rc genhtml_branch_coverage=1 00:22:38.836 --rc genhtml_function_coverage=1 00:22:38.836 --rc genhtml_legend=1 00:22:38.836 --rc geninfo_all_blocks=1 00:22:38.836 --rc geninfo_unexecuted_blocks=1 00:22:38.836 00:22:38.836 ' 00:22:38.836 13:53:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:38.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:38.836 --rc genhtml_branch_coverage=1 00:22:38.836 --rc genhtml_function_coverage=1 00:22:38.836 --rc genhtml_legend=1 00:22:38.836 --rc geninfo_all_blocks=1 00:22:38.836 --rc geninfo_unexecuted_blocks=1 00:22:38.836 00:22:38.836 ' 00:22:38.836 13:53:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:38.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:38.836 --rc genhtml_branch_coverage=1 00:22:38.836 --rc genhtml_function_coverage=1 00:22:38.836 --rc genhtml_legend=1 00:22:38.836 --rc geninfo_all_blocks=1 00:22:38.836 --rc geninfo_unexecuted_blocks=1 00:22:38.836 00:22:38.836 ' 00:22:38.836 13:53:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:38.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:38.836 --rc genhtml_branch_coverage=1 00:22:38.836 --rc genhtml_function_coverage=1 00:22:38.836 --rc genhtml_legend=1 00:22:38.836 --rc geninfo_all_blocks=1 00:22:38.836 --rc geninfo_unexecuted_blocks=1 00:22:38.836 00:22:38.836 ' 00:22:38.836 13:53:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:22:38.836 13:53:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:22:38.836 13:53:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:38.836 13:53:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:38.836 13:53:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:38.836 13:53:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:38.836 13:53:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:38.836 13:53:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:38.836 13:53:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:38.836 13:53:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:38.836 13:53:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:38.836 13:53:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:38.836 13:53:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:22:38.836 13:53:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:22:38.836 13:53:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:38.836 13:53:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:38.836 13:53:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:38.836 13:53:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:38.836 13:53:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:22:38.836 13:53:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:22:38.836 13:53:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:38.836 13:53:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:38.836 13:53:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:38.836 13:53:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:38.836 13:53:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:38.837 13:53:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:38.837 13:53:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:22:38.837 13:53:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:38.837 13:53:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:22:38.837 13:53:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:38.837 13:53:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:38.837 13:53:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:38.837 13:53:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:38.837 13:53:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:38.837 13:53:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:38.837 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:38.837 13:53:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:38.837 13:53:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:38.837 13:53:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:38.837 13:53:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:22:38.837 13:53:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:22:38.837 13:53:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:38.837 13:53:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:38.837 13:53:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:38.837 13:53:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:38.837 13:53:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:38.837 13:53:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:38.837 13:53:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:38.837 13:53:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:38.837 13:53:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:38.837 13:53:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:22:38.837 13:53:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:22:45.438 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:45.438 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:22:45.438 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:45.438 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:45.438 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:45.438 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:45.438 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:45.438 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:22:45.438 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:45.438 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:22:45.438 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:22:45.438 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:22:45.438 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # 
local -ga x722 00:22:45.438 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:22:45.438 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:22:45.438 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:45.438 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:45.438 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:45.438 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:45.438 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:45.438 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:45.438 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:45.438 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:45.438 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:45.438 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:45.438 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:45.438 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:45.438 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:45.438 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:22:45.438 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:22:45.438 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:22:45.438 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:22:45.438 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:22:45.438 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:45.438 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:45.438 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:22:45.438 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:22:45.438 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:22:45.438 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:22:45.438 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:45.438 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x1015 == 
\0\x\1\0\1\9 ]] 00:22:45.438 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:22:45.438 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:22:45.438 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:45.438 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:22:45.438 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:22:45.438 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:22:45.438 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:22:45.438 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:45.438 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:22:45.438 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:22:45.438 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:22:45.438 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:45.438 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:22:45.438 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:45.438 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:45.438 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:22:45.438 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:45.438 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:45.438 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:22:45.438 Found net devices under 0000:18:00.0: mlx_0_0 00:22:45.438 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:45.438 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:45.438 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:45.438 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:22:45.438 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:45.438 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:45.438 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:22:45.438 Found net devices under 0000:18:00.1: mlx_0_1 00:22:45.438 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:45.438 13:53:44 
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:45.438 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:22:45.438 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:45.438 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:22:45.438 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:22:45.438 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@448 -- # rdma_device_init 00:22:45.438 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:22:45.438 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@62 -- # uname 00:22:45.438 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:22:45.438 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@66 -- # modprobe ib_cm 00:22:45.438 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@67 -- # modprobe ib_core 00:22:45.438 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@68 -- # modprobe ib_umad 00:22:45.438 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:22:45.438 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@70 -- # modprobe iw_cm 00:22:45.438 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:22:45.438 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:22:45.438 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@530 -- # allocate_nic_ips 00:22:45.438 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:22:45.438 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@77 -- # get_rdma_if_list 00:22:45.438 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:45.438 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:22:45.438 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:22:45.438 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:45.438 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:22:45.438 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:22:45.438 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:45.438 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:45.438 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@108 -- # echo mlx_0_0 00:22:45.438 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@109 -- # continue 2 00:22:45.438 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:22:45.438 
13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:45.438 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:45.438 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:45.438 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:45.438 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@108 -- # echo mlx_0_1 00:22:45.438 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@109 -- # continue 2 00:22:45.438 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:22:45.438 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:22:45.438 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:22:45.438 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:22:45.438 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:22:45.438 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:22:45.438 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:22:45.438 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:22:45.438 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:22:45.438 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:45.438 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:22:45.438 altname enp24s0f0np0 00:22:45.438 altname ens785f0np0 00:22:45.438 inet 192.168.100.8/24 scope global mlx_0_0 00:22:45.438 valid_lft forever preferred_lft forever 00:22:45.438 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:22:45.438 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:22:45.438 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:22:45.438 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:22:45.438 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:22:45.438 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:22:45.438 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:22:45.438 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:22:45.438 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:22:45.438 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:45.438 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:22:45.438 altname enp24s0f1np1 00:22:45.438 altname ens785f1np1 00:22:45.438 inet 192.168.100.9/24 scope global mlx_0_1 00:22:45.438 valid_lft forever preferred_lft forever 00:22:45.438 13:53:44 
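[editor note] Both port addresses (192.168.100.8 and 192.168.100.9) come out of the get_ip_address pipeline traced at nvmf/common.sh@116-117 above. Reassembled as a self-contained function, with names and commands taken directly from the trace:

# get_ip_address as traced above: column 4 of `ip -o -4 addr show` is the
# CIDR address; cut drops the /24 prefix length.
get_ip_address() {
    local interface=$1
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}
get_ip_address mlx_0_0   # -> 192.168.100.8 in this run
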
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:22:45.438 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:45.438 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:22:45.438 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:22:45.438 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:22:45.438 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@90 -- # get_rdma_if_list 00:22:45.438 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:45.438 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:22:45.438 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:22:45.438 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:45.438 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:22:45.438 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:22:45.438 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:45.438 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:45.438 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@108 -- # echo mlx_0_0 00:22:45.438 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@109 -- # continue 2 00:22:45.438 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:22:45.438 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:45.438 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:45.438 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:45.438 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:45.438 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@108 -- # echo mlx_0_1 00:22:45.438 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@109 -- # continue 2 00:22:45.438 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:22:45.438 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:22:45.438 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:22:45.438 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:22:45.438 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:22:45.438 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:22:45.438 
13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:22:45.438 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:22:45.438 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:22:45.438 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:22:45.438 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:22:45.438 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:22:45.438 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:22:45.438 192.168.100.9' 00:22:45.438 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:22:45.438 192.168.100.9' 00:22:45.438 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@485 -- # head -n 1 00:22:45.439 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:22:45.439 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:22:45.439 192.168.100.9' 00:22:45.439 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@486 -- # tail -n +2 00:22:45.439 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@486 -- # head -n 1 00:22:45.439 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:22:45.439 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:22:45.439 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:22:45.439 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:22:45.439 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:22:45.439 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:22:45.439 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:22:45.439 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:45.439 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:45.439 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:22:45.439 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=1741864 00:22:45.439 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 1741864 00:22:45.439 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:22:45.439 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 1741864 ']' 00:22:45.439 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:45.439 13:53:44 
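[editor note] The head/tail juggling above (nvmf/common.sh@484-486) distills RDMA_IP_LIST into the first and second target addresses. The same selection, condensed, with the values observed in this run:

# First line becomes the first target IP; the first of the remaining
# lines becomes the second, exactly as traced above.
RDMA_IP_LIST='192.168.100.8
192.168.100.9'
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)
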
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:45.439 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:45.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:45.439 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:45.439 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:22:45.439 [2024-12-05 13:53:44.632225] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 00:22:45.439 [2024-12-05 13:53:44.632268] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:45.439 [2024-12-05 13:53:44.704247] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:45.439 [2024-12-05 13:53:44.725033] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:45.439 [2024-12-05 13:53:44.725071] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:45.439 [2024-12-05 13:53:44.725077] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:45.439 [2024-12-05 13:53:44.725084] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:45.439 [2024-12-05 13:53:44.725088] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:45.439 [2024-12-05 13:53:44.726363] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:45.439 [2024-12-05 13:53:44.726467] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:45.439 [2024-12-05 13:53:44.726468] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:45.439 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:45.439 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:22:45.439 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:45.439 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:45.439 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:22:45.439 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:45.439 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:22:45.439 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.439 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:22:45.439 [2024-12-05 13:53:44.874817] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x14156c0/0x1419bb0) succeed. 
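[editor note] rpc_cmd forwards its arguments to SPDK's rpc.py over /var/tmp/spdk.sock, so the transport created above can be reproduced against a standalone nvmf_tgt. A sketch assuming the command is run from the SPDK repo root; the flags are exactly those in the trace:

# Equivalent of the traced `rpc_cmd nvmf_create_transport` call, issued
# through rpc.py directly (-u sets the IO unit size).
scripts/rpc.py -s /var/tmp/spdk.sock \
    nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
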
00:22:45.439 [2024-12-05 13:53:44.882986] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1416cb0/0x145b250) succeed. 00:22:45.439 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.439 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:22:45.439 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.439 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:22:45.439 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.439 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:22:45.439 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.439 13:53:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:22:45.439 [2024-12-05 13:53:44.998760] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:22:45.439 13:53:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.439 13:53:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:22:45.439 13:53:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.439 13:53:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:22:45.439 NULL1 00:22:45.439 13:53:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.439 13:53:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=1742105 00:22:45.439 13:53:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:22:45.439 13:53:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:22:45.439 13:53:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:22:45.439 13:53:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:22:45.439 13:53:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:22:45.439 13:53:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:22:45.439 13:53:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:22:45.439 13:53:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:22:45.439 13:53:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:22:45.439 
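[editor note] Taken together, the rpc_cmd calls traced above build the target side before the stress tool starts: one subsystem, one RDMA listener on the first target IP, and one null backing bdev. The sequence, collected with arguments copied from connect_stress.sh@16-18; any later namespace-attach step falls outside this excerpt:

# Target-side setup as traced above.
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
    -a -s SPDK00000000000001 -m 10          # allow any host, 10 namespaces max
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t rdma -a 192.168.100.8 -s 4420
rpc_cmd bdev_null_create NULL1 1000 512     # 1000 MiB null bdev, 512 B blocks
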
13:53:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:22:45.439 13:53:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:22:45.439 13:53:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:22:45.439 13:53:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:22:45.439 13:53:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:22:45.439 13:53:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:22:45.439 13:53:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:22:45.439 13:53:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:22:45.439 13:53:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:22:45.439 13:53:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:22:45.439 13:53:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:22:45.439 13:53:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:22:45.439 13:53:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:22:45.439 13:53:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:22:45.439 13:53:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:22:45.439 13:53:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:22:45.439 13:53:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:22:45.439 13:53:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:22:45.439 13:53:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:22:45.439 13:53:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:22:45.439 13:53:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:22:45.439 13:53:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:22:45.439 13:53:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:22:45.439 13:53:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:22:45.439 13:53:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:22:45.439 13:53:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:22:45.439 13:53:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:22:45.439 13:53:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:22:45.439 13:53:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:22:45.439 13:53:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:22:45.439 
13:53:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:22:45.439 13:53:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:22:45.439 13:53:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:22:45.439 13:53:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:22:45.439 13:53:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:22:45.439 13:53:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1742105 00:22:45.439 13:53:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:22:45.439 13:53:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.439 13:53:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:22:45.696 13:53:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.696 13:53:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1742105 00:22:45.696 13:53:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:22:45.696 13:53:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.696 13:53:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:22:45.954 13:53:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.954 13:53:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1742105 00:22:45.954 13:53:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:22:45.955 13:53:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.955 13:53:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:22:46.521 13:53:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:46.521 13:53:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1742105 00:22:46.521 13:53:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:22:46.521 13:53:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:46.521 13:53:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:22:46.780 13:53:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:46.780 13:53:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1742105 00:22:46.780 13:53:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:22:46.780 13:53:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:46.780 13:53:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:22:47.038 13:53:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.038 
13:53:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1742105 00:22:47.038 13:53:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:22:47.038 13:53:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.038 13:53:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:22:47.296 13:53:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.296 13:53:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1742105 00:22:47.296 13:53:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:22:47.296 13:53:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.296 13:53:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:22:47.555 13:53:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.555 13:53:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1742105 00:22:47.555 13:53:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:22:47.555 13:53:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.555 13:53:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:22:48.123 13:53:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.123 13:53:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1742105 00:22:48.123 13:53:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:22:48.123 13:53:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.123 13:53:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:22:48.381 13:53:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.381 13:53:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1742105 00:22:48.381 13:53:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:22:48.381 13:53:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.381 13:53:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:22:48.639 13:53:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.639 13:53:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1742105 00:22:48.639 13:53:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:22:48.639 13:53:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.639 13:53:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:22:48.897 13:53:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
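[editor note] The kill -0 / rpc_cmd blocks repeating above and below are iterations of the stress watchdog: as long as the connect_stress PID (1742105) answers kill -0, the harness keeps replaying the RPC batch built earlier with the seq/cat loop. A plausible reconstruction of connect_stress.sh@34-38; the redirections are assumptions, only the command names come from the trace:

# Poll the stress tool and feed it RPCs until the PID disappears; the
# "kill: (1742105) - No such process" message further down is this loop ending.
while kill -0 "$PERF_PID" 2>/dev/null; do
    rpc_cmd < "$rpcs"        # replay the 20 queued RPCs
done
wait "$PERF_PID"             # reap the exit status once it is gone
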
00:22:48.897 13:53:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1742105 00:22:48.897 13:53:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:22:48.897 13:53:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.897 13:53:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:22:49.477 13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.477 13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1742105 00:22:49.477 13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:22:49.477 13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.477 13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:22:49.736 13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.736 13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1742105 00:22:49.736 13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:22:49.736 13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.736 13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:22:49.995 13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.995 13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1742105 00:22:49.995 13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:22:49.995 13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.995 13:53:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:22:50.254 13:53:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.254 13:53:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1742105 00:22:50.254 13:53:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:22:50.254 13:53:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.254 13:53:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:22:50.513 13:53:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.513 13:53:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1742105 00:22:50.513 13:53:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:22:50.513 13:53:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.513 13:53:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:22:51.081 13:53:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:22:51.081 13:53:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1742105 00:22:51.081 13:53:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:22:51.081 13:53:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.081 13:53:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:22:51.340 13:53:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.340 13:53:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1742105 00:22:51.340 13:53:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:22:51.340 13:53:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.340 13:53:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:22:51.598 13:53:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.598 13:53:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1742105 00:22:51.598 13:53:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:22:51.598 13:53:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.598 13:53:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:22:51.856 13:53:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.856 13:53:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1742105 00:22:51.856 13:53:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:22:51.856 13:53:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.856 13:53:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:22:52.113 13:53:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.113 13:53:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1742105 00:22:52.113 13:53:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:22:52.113 13:53:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.113 13:53:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:22:52.677 13:53:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.677 13:53:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1742105 00:22:52.677 13:53:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:22:52.677 13:53:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.677 13:53:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:22:52.934 13:53:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.934 13:53:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1742105 00:22:52.934 13:53:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:22:52.934 13:53:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.934 13:53:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:22:53.192 13:53:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.192 13:53:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1742105 00:22:53.192 13:53:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:22:53.192 13:53:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.192 13:53:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:22:53.450 13:53:53 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.450 13:53:53 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1742105 00:22:53.450 13:53:53 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:22:53.450 13:53:53 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.450 13:53:53 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:22:54.018 13:53:53 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.018 13:53:53 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1742105 00:22:54.018 13:53:53 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:22:54.018 13:53:53 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.018 13:53:53 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:22:54.276 13:53:53 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.276 13:53:53 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1742105 00:22:54.276 13:53:53 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:22:54.276 13:53:53 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.276 13:53:53 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:22:54.534 13:53:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.534 13:53:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1742105 00:22:54.534 13:53:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:22:54.534 13:53:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.534 13:53:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:22:54.793 13:53:54 
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.793 13:53:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1742105 00:22:54.793 13:53:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:22:54.793 13:53:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.793 13:53:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:22:55.364 13:53:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.364 13:53:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1742105 00:22:55.364 13:53:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:22:55.364 13:53:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.364 13:53:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:22:55.623 13:53:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.623 13:53:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1742105 00:22:55.623 13:53:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:22:55.623 13:53:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.623 13:53:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:22:55.623 Testing NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:22:55.882 13:53:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.882 13:53:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1742105 00:22:55.882 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (1742105) - No such process 00:22:55.882 13:53:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 1742105 00:22:55.882 13:53:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:22:55.882 13:53:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:22:55.882 13:53:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:22:55.882 13:53:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:55.882 13:53:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:22:55.882 13:53:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:22:55.882 13:53:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:22:55.882 13:53:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:22:55.882 13:53:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:55.882 13:53:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- 
# modprobe -v -r nvme-rdma 00:22:55.882 rmmod nvme_rdma 00:22:55.882 rmmod nvme_fabrics 00:22:55.882 13:53:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:55.882 13:53:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:22:55.882 13:53:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:22:55.882 13:53:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 1741864 ']' 00:22:55.882 13:53:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 1741864 00:22:55.882 13:53:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 1741864 ']' 00:22:55.882 13:53:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 1741864 00:22:55.882 13:53:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname 00:22:55.882 13:53:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:55.882 13:53:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1741864 00:22:55.882 13:53:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:55.882 13:53:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:55.882 13:53:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1741864' 00:22:55.882 killing process with pid 1741864 00:22:55.882 13:53:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 1741864 00:22:55.882 13:53:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 1741864 00:22:56.141 13:53:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:56.141 13:53:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:22:56.141 00:22:56.141 real 0m17.550s 00:22:56.141 user 0m41.075s 00:22:56.141 sys 0m6.486s 00:22:56.141 13:53:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:56.141 13:53:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:22:56.141 ************************************ 00:22:56.141 END TEST nvmf_connect_stress 00:22:56.141 ************************************ 00:22:56.141 13:53:55 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=rdma 00:22:56.141 13:53:55 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:56.141 13:53:55 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:56.141 13:53:55 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:56.141 ************************************ 00:22:56.141 START TEST nvmf_fused_ordering 00:22:56.141 ************************************ 00:22:56.141 13:53:55 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh 
--transport=rdma 00:22:56.401 * Looking for test storage... 00:22:56.401 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:22:56.401 13:53:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:56.401 13:53:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lcov --version 00:22:56.401 13:53:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:56.401 13:53:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:56.401 13:53:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:56.401 13:53:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:56.401 13:53:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:56.401 13:53:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:22:56.401 13:53:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:22:56.401 13:53:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:22:56.401 13:53:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:22:56.401 13:53:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:22:56.401 13:53:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:22:56.401 13:53:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:22:56.401 13:53:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:56.401 13:53:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:22:56.401 13:53:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:22:56.401 13:53:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:56.401 13:53:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:56.401 13:53:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:22:56.401 13:53:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:22:56.401 13:53:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:56.401 13:53:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:22:56.401 13:53:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:22:56.401 13:53:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:22:56.401 13:53:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:22:56.401 13:53:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:56.401 13:53:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:22:56.401 13:53:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:22:56.401 13:53:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:56.401 13:53:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:56.401 13:53:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:22:56.401 13:53:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:56.401 13:53:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:56.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:56.401 --rc genhtml_branch_coverage=1 00:22:56.401 --rc genhtml_function_coverage=1 00:22:56.401 --rc genhtml_legend=1 00:22:56.401 --rc geninfo_all_blocks=1 00:22:56.401 --rc geninfo_unexecuted_blocks=1 00:22:56.401 00:22:56.401 ' 00:22:56.401 13:53:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:56.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:56.401 --rc genhtml_branch_coverage=1 00:22:56.401 --rc genhtml_function_coverage=1 00:22:56.401 --rc genhtml_legend=1 00:22:56.402 --rc geninfo_all_blocks=1 00:22:56.402 --rc geninfo_unexecuted_blocks=1 00:22:56.402 00:22:56.402 ' 00:22:56.402 13:53:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:56.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:56.402 --rc genhtml_branch_coverage=1 00:22:56.402 --rc genhtml_function_coverage=1 00:22:56.402 --rc genhtml_legend=1 00:22:56.402 --rc geninfo_all_blocks=1 00:22:56.402 --rc geninfo_unexecuted_blocks=1 00:22:56.402 00:22:56.402 ' 00:22:56.402 13:53:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:56.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:56.402 --rc genhtml_branch_coverage=1 00:22:56.402 --rc genhtml_function_coverage=1 00:22:56.402 --rc genhtml_legend=1 00:22:56.402 --rc geninfo_all_blocks=1 00:22:56.402 --rc geninfo_unexecuted_blocks=1 00:22:56.402 00:22:56.402 ' 00:22:56.402 13:53:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:22:56.402 13:53:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:22:56.402 13:53:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:56.402 13:53:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:56.402 13:53:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:56.402 13:53:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:56.402 13:53:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:56.402 13:53:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:56.402 13:53:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:56.402 13:53:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:56.402 13:53:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:56.402 13:53:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:56.402 13:53:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:22:56.402 13:53:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:22:56.402 13:53:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:56.402 13:53:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:56.402 13:53:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:56.402 13:53:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:56.402 13:53:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:22:56.402 13:53:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:22:56.402 13:53:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:56.402 13:53:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:56.402 13:53:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:56.402 13:53:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:56.402 13:53:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:56.402 13:53:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:56.402 13:53:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:22:56.402 13:53:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:56.402 13:53:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:22:56.402 13:53:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:56.402 13:53:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:56.402 13:53:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:56.402 13:53:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:56.402 13:53:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:56.402 13:53:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:56.402 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:56.402 13:53:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:56.402 13:53:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:56.402 13:53:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:56.402 13:53:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:22:56.402 13:53:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:22:56.402 13:53:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:56.402 13:53:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:56.402 13:53:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:56.402 13:53:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:56.402 13:53:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:56.402 13:53:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:56.402 13:53:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:56.402 13:53:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:56.402 13:53:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:56.402 13:53:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:22:56.402 13:53:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:23:02.966 13:54:01 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:02.966 13:54:01 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:23:02.966 13:54:01 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:02.966 13:54:01 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:02.966 13:54:01 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:02.966 13:54:01 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:02.966 13:54:01 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:02.966 13:54:01 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:23:02.966 13:54:01 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:02.966 13:54:01 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:23:02.966 13:54:01 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:23:02.966 13:54:01 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:23:02.966 13:54:01 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # 
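The "[: : integer expression expected" failure above comes from the test command '[' '' -eq 1 ']': the variable behind the first operand expands to an empty string, and -eq requires integers on both sides. A defensive pattern would default the value before comparing (sketch only; the variable name here is hypothetical, not the actual common.sh source):

    # Default the flag to 0 so the numeric comparison always sees an integer
    : "${ENABLE_FOO:=0}"
    if [ "${ENABLE_FOO:-0}" -eq 1 ]; then
        echo "feature enabled"
    fi
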
local -ga x722 00:23:02.966 13:54:01 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:23:02.966 13:54:01 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:23:02.966 13:54:01 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:02.966 13:54:01 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:02.966 13:54:01 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:02.966 13:54:01 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:02.966 13:54:01 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:02.966 13:54:01 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:02.967 13:54:01 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:02.967 13:54:01 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:02.967 13:54:01 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:02.967 13:54:01 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:02.967 13:54:01 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:02.967 13:54:01 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:02.967 13:54:01 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:02.967 13:54:01 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:23:02.967 13:54:01 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:23:02.967 13:54:01 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:23:02.967 13:54:01 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:23:02.967 13:54:01 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:23:02.967 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:02.967 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:02.967 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:23:02.967 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:23:02.967 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:23:02.967 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:23:02.967 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:23:02.967 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x1015 == 
\0\x\1\0\1\9 ]] 00:23:02.967 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:23:02.967 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:23:02.967 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:02.967 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:23:02.967 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:23:02.967 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:23:02.967 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:23:02.967 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:23:02.967 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:02.967 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:23:02.967 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:23:02.967 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:02.967 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:23:02.967 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:02.967 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:02.967 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:23:02.967 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:02.967 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:02.967 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:23:02.967 Found net devices under 0000:18:00.0: mlx_0_0 00:23:02.967 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:02.967 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:02.967 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:02.967 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:23:02.967 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:02.967 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:02.967 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:23:02.967 Found net devices under 0000:18:00.1: mlx_0_1 00:23:02.967 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:02.967 13:54:02 
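The discovery loop above resolves each Mellanox PCI function to its kernel network interface through sysfs, which is how the "Found net devices under ..." lines are produced. Restated as a standalone sketch (PCI addresses taken from this run):

    # Each PCI function lists its netdevs under /sys/bus/pci/devices/<addr>/net/
    for pci in 0000:18:00.0 0000:18:00.1; do
        for netdir in /sys/bus/pci/devices/$pci/net/*; do
            [ -e "$netdir" ] || continue
            echo "Found net devices under $pci: ${netdir##*/}"
        done
    done
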
nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:02.967 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:23:02.967 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:02.967 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:23:02.967 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:23:02.967 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@448 -- # rdma_device_init 00:23:02.967 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:23:02.967 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@62 -- # uname 00:23:02.967 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:23:02.967 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@66 -- # modprobe ib_cm 00:23:02.967 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@67 -- # modprobe ib_core 00:23:02.967 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@68 -- # modprobe ib_umad 00:23:02.967 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:23:02.967 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@70 -- # modprobe iw_cm 00:23:02.967 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:23:02.967 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:23:02.967 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@530 -- # allocate_nic_ips 00:23:02.967 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:23:02.967 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@77 -- # get_rdma_if_list 00:23:02.967 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:02.967 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:23:02.967 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:23:02.967 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:02.967 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:23:02.967 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:23:02.967 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:02.967 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:02.967 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@108 -- # echo mlx_0_0 00:23:02.967 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@109 -- # continue 2 00:23:02.967 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:23:02.967 
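rdma_device_init above amounts to loading the RDMA core stack before any interface is configured; the traced modprobes reduce to a short loop, roughly:

    # Sketch mirroring the traced load_ib_rdma_modules (Linux only)
    load_ib_rdma_modules() {
        [ "$(uname -s)" = Linux ] || return 0
        local mod
        for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
            modprobe "$mod"
        done
    }
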
13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:02.967 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:02.967 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:02.967 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:02.967 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@108 -- # echo mlx_0_1 00:23:02.967 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@109 -- # continue 2 00:23:02.967 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:23:02.967 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:23:02.967 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:23:02.967 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:23:02.967 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # awk '{print $4}' 00:23:02.967 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # cut -d/ -f1 00:23:02.967 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:23:02.967 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:23:02.967 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:23:02.967 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:02.967 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:23:02.967 altname enp24s0f0np0 00:23:02.967 altname ens785f0np0 00:23:02.967 inet 192.168.100.8/24 scope global mlx_0_0 00:23:02.967 valid_lft forever preferred_lft forever 00:23:02.967 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:23:02.967 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:23:02.967 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:23:02.967 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:23:02.967 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # awk '{print $4}' 00:23:02.967 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # cut -d/ -f1 00:23:02.967 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:23:02.967 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:23:02.967 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:23:02.967 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:02.968 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:23:02.968 altname enp24s0f1np1 00:23:02.968 altname ens785f1np1 00:23:02.968 inet 192.168.100.9/24 scope global mlx_0_1 00:23:02.968 valid_lft forever preferred_lft forever 00:23:02.968 13:54:02 
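get_ip_address, as traced twice above, is a three-stage pipeline: ip -o -4 addr show prints one record per address, awk takes the fourth field (address/prefixlen), and cut strips the prefix length. As a self-contained restatement:

    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0   # prints 192.168.100.8 on this host
    get_ip_address mlx_0_1   # prints 192.168.100.9
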
nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:23:02.968 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:02.968 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:23:02.968 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:23:02.968 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:23:02.968 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@90 -- # get_rdma_if_list 00:23:02.968 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:02.968 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:23:02.968 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:23:02.968 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:02.968 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:23:02.968 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:23:02.968 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:02.968 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:02.968 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@108 -- # echo mlx_0_0 00:23:02.968 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@109 -- # continue 2 00:23:02.968 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:23:02.968 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:02.968 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:02.968 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:02.968 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:02.968 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@108 -- # echo mlx_0_1 00:23:02.968 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@109 -- # continue 2 00:23:02.968 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:23:02.968 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:23:02.968 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:23:02.968 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:23:02.968 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # awk '{print $4}' 00:23:02.968 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # cut -d/ -f1 00:23:02.968 
13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:23:02.968 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:23:02.968 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:23:02.968 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:23:02.968 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # awk '{print $4}' 00:23:02.968 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # cut -d/ -f1 00:23:02.968 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:23:02.968 192.168.100.9' 00:23:02.968 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:23:02.968 192.168.100.9' 00:23:02.968 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@485 -- # head -n 1 00:23:02.968 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:23:02.968 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:23:02.968 192.168.100.9' 00:23:02.968 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@486 -- # tail -n +2 00:23:02.968 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@486 -- # head -n 1 00:23:02.968 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:23:02.968 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:23:02.968 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:23:02.968 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:23:02.968 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:23:02.968 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:23:02.968 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:23:02.968 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:02.968 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:02.968 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:23:02.968 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=1747375 00:23:02.968 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 1747375 00:23:02.968 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:02.968 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 1747375 ']' 00:23:02.968 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:02.968 13:54:02 
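Picking the two target addresses out of RDMA_IP_LIST is plain head/tail slicing of a newline-separated string, exactly as the trace shows:

    RDMA_IP_LIST='192.168.100.8
    192.168.100.9'
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)
    # -> 192.168.100.8 and 192.168.100.9 respectively
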
nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:02.968 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:02.968 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:02.968 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:02.968 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:23:02.968 [2024-12-05 13:54:02.256518] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 00:23:02.968 [2024-12-05 13:54:02.256562] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:02.968 [2024-12-05 13:54:02.330335] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:02.968 [2024-12-05 13:54:02.350392] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:02.968 [2024-12-05 13:54:02.350424] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:02.968 [2024-12-05 13:54:02.350431] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:02.968 [2024-12-05 13:54:02.350436] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:02.968 [2024-12-05 13:54:02.350441] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:02.968 [2024-12-05 13:54:02.350815] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:02.968 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:02.968 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:23:02.968 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:02.968 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:02.968 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:23:02.968 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:02.968 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:23:02.968 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.968 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:23:02.968 [2024-12-05 13:54:02.506152] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xf76000/0xf7a4f0) succeed. 00:23:02.968 [2024-12-05 13:54:02.515325] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xf774b0/0xfbbb90) succeed. 
00:23:02.968 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:02.968 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:23:02.968 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.968 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:23:02.968 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:02.968 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:23:02.968 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.968 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:23:02.968 [2024-12-05 13:54:02.560868] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:23:02.968 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:02.968 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:23:02.968 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.968 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:23:02.968 NULL1 00:23:02.968 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:02.968 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:23:02.968 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.969 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:23:02.969 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:02.969 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:23:02.969 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.969 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:23:02.969 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:02.969 13:54:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:23:02.969 [2024-12-05 13:54:02.619332] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 
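Taken together, the rpc_cmd calls above are the whole target-side recipe for this test. Driven through SPDK's rpc.py client with the same arguments as the trace (the RPC path assumes this workspace layout), the sequence would be:

    RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    $RPC bdev_null_create NULL1 1000 512    # ~1 GB null bdev, 512-byte blocks
    $RPC bdev_wait_for_examine
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

The fused_ordering binary then connects with 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' and produces the numbered fused-command submissions logged below.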
00:23:02.969 [2024-12-05 13:54:02.619384] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1747466 ] 00:23:02.969 Attached to nqn.2016-06.io.spdk:cnode1 00:23:02.969 Namespace ID: 1 size: 1GB 00:23:02.969 fused_ordering(0) 00:23:02.969 fused_ordering(1) 00:23:02.969 fused_ordering(2) 00:23:02.969 fused_ordering(3) 00:23:02.969 fused_ordering(4) 00:23:02.969 fused_ordering(5) 00:23:02.969 fused_ordering(6) 00:23:02.969 fused_ordering(7) 00:23:02.969 fused_ordering(8) 00:23:02.969 fused_ordering(9) 00:23:02.969 fused_ordering(10) 00:23:02.969 fused_ordering(11) 00:23:02.969 fused_ordering(12) 00:23:02.969 fused_ordering(13) 00:23:02.969 fused_ordering(14) 00:23:02.969 fused_ordering(15) 00:23:02.969 fused_ordering(16) 00:23:02.969 fused_ordering(17) 00:23:02.969 fused_ordering(18) 00:23:02.969 fused_ordering(19) 00:23:02.969 fused_ordering(20) 00:23:02.969 fused_ordering(21) 00:23:02.969 fused_ordering(22) 00:23:02.969 fused_ordering(23) 00:23:02.969 fused_ordering(24) 00:23:02.969 fused_ordering(25) 00:23:02.969 fused_ordering(26) 00:23:02.969 fused_ordering(27) 00:23:02.969 fused_ordering(28) 00:23:02.969 fused_ordering(29) 00:23:02.969 fused_ordering(30) 00:23:02.969 fused_ordering(31) 00:23:02.969 fused_ordering(32) 00:23:02.969 fused_ordering(33) 00:23:02.969 fused_ordering(34) 00:23:02.969 fused_ordering(35) 00:23:02.969 fused_ordering(36) 00:23:02.969 fused_ordering(37) 00:23:02.969 fused_ordering(38) 00:23:02.969 fused_ordering(39) 00:23:02.969 fused_ordering(40) 00:23:02.969 fused_ordering(41) 00:23:02.969 fused_ordering(42) 00:23:02.969 fused_ordering(43) 00:23:02.969 fused_ordering(44) 00:23:02.969 fused_ordering(45) 00:23:02.969 fused_ordering(46) 00:23:02.969 fused_ordering(47) 00:23:02.969 fused_ordering(48) 00:23:02.969 fused_ordering(49) 00:23:02.969 fused_ordering(50) 00:23:02.969 fused_ordering(51) 00:23:02.969 fused_ordering(52) 00:23:02.969 fused_ordering(53) 00:23:02.969 fused_ordering(54) 00:23:02.969 fused_ordering(55) 00:23:02.969 fused_ordering(56) 00:23:02.969 fused_ordering(57) 00:23:02.969 fused_ordering(58) 00:23:02.969 fused_ordering(59) 00:23:02.969 fused_ordering(60) 00:23:02.969 fused_ordering(61) 00:23:02.969 fused_ordering(62) 00:23:02.969 fused_ordering(63) 00:23:02.969 fused_ordering(64) 00:23:02.969 fused_ordering(65) 00:23:02.969 fused_ordering(66) 00:23:02.969 fused_ordering(67) 00:23:02.969 fused_ordering(68) 00:23:02.969 fused_ordering(69) 00:23:02.969 fused_ordering(70) 00:23:02.969 fused_ordering(71) 00:23:02.969 fused_ordering(72) 00:23:02.969 fused_ordering(73) 00:23:02.969 fused_ordering(74) 00:23:02.969 fused_ordering(75) 00:23:02.969 fused_ordering(76) 00:23:02.969 fused_ordering(77) 00:23:02.969 fused_ordering(78) 00:23:02.969 fused_ordering(79) 00:23:02.969 fused_ordering(80) 00:23:02.969 fused_ordering(81) 00:23:02.969 fused_ordering(82) 00:23:02.969 fused_ordering(83) 00:23:02.969 fused_ordering(84) 00:23:02.969 fused_ordering(85) 00:23:02.969 fused_ordering(86) 00:23:02.969 fused_ordering(87) 00:23:02.969 fused_ordering(88) 00:23:02.969 fused_ordering(89) 00:23:02.969 fused_ordering(90) 00:23:02.969 fused_ordering(91) 00:23:02.969 fused_ordering(92) 00:23:02.969 fused_ordering(93) 00:23:02.969 fused_ordering(94) 00:23:02.969 fused_ordering(95) 00:23:02.969 fused_ordering(96) 00:23:02.969 fused_ordering(97) 00:23:02.969 fused_ordering(98) 
00:23:02.969 fused_ordering(99) 00:23:02.969 fused_ordering(100) 00:23:02.969 fused_ordering(101) 00:23:02.969 fused_ordering(102) 00:23:02.969 fused_ordering(103) 00:23:02.969 fused_ordering(104) 00:23:02.969 fused_ordering(105) 00:23:02.969 fused_ordering(106) 00:23:02.969 fused_ordering(107) 00:23:02.969 fused_ordering(108) 00:23:02.969 fused_ordering(109) 00:23:02.969 fused_ordering(110) 00:23:02.969 fused_ordering(111) 00:23:02.969 fused_ordering(112) 00:23:02.969 fused_ordering(113) 00:23:02.969 fused_ordering(114) 00:23:02.969 fused_ordering(115) 00:23:02.969 fused_ordering(116) 00:23:02.969 fused_ordering(117) 00:23:02.969 fused_ordering(118) 00:23:02.969 fused_ordering(119) 00:23:02.969 fused_ordering(120) 00:23:02.969 fused_ordering(121) 00:23:02.969 fused_ordering(122) 00:23:02.969 fused_ordering(123) 00:23:02.969 fused_ordering(124) 00:23:02.969 fused_ordering(125) 00:23:02.969 fused_ordering(126) 00:23:02.969 fused_ordering(127) 00:23:02.969 fused_ordering(128) 00:23:02.969 fused_ordering(129) 00:23:02.969 fused_ordering(130) 00:23:02.969 fused_ordering(131) 00:23:02.969 fused_ordering(132) 00:23:02.969 fused_ordering(133) 00:23:02.969 fused_ordering(134) 00:23:02.969 fused_ordering(135) 00:23:02.969 fused_ordering(136) 00:23:02.969 fused_ordering(137) 00:23:02.969 fused_ordering(138) 00:23:02.969 fused_ordering(139) 00:23:02.969 fused_ordering(140) 00:23:02.969 fused_ordering(141) 00:23:02.969 fused_ordering(142) 00:23:02.969 fused_ordering(143) 00:23:02.969 fused_ordering(144) 00:23:02.969 fused_ordering(145) 00:23:02.969 fused_ordering(146) 00:23:02.969 fused_ordering(147) 00:23:02.969 fused_ordering(148) 00:23:02.969 fused_ordering(149) 00:23:02.969 fused_ordering(150) 00:23:02.969 fused_ordering(151) 00:23:02.969 fused_ordering(152) 00:23:02.969 fused_ordering(153) 00:23:02.969 fused_ordering(154) 00:23:02.969 fused_ordering(155) 00:23:02.969 fused_ordering(156) 00:23:02.969 fused_ordering(157) 00:23:02.969 fused_ordering(158) 00:23:02.969 fused_ordering(159) 00:23:02.969 fused_ordering(160) 00:23:02.969 fused_ordering(161) 00:23:02.969 fused_ordering(162) 00:23:02.969 fused_ordering(163) 00:23:02.969 fused_ordering(164) 00:23:02.969 fused_ordering(165) 00:23:02.969 fused_ordering(166) 00:23:02.969 fused_ordering(167) 00:23:02.969 fused_ordering(168) 00:23:02.969 fused_ordering(169) 00:23:02.969 fused_ordering(170) 00:23:02.969 fused_ordering(171) 00:23:02.969 fused_ordering(172) 00:23:02.969 fused_ordering(173) 00:23:02.969 fused_ordering(174) 00:23:02.969 fused_ordering(175) 00:23:02.969 fused_ordering(176) 00:23:02.969 fused_ordering(177) 00:23:02.969 fused_ordering(178) 00:23:02.969 fused_ordering(179) 00:23:02.969 fused_ordering(180) 00:23:02.969 fused_ordering(181) 00:23:02.969 fused_ordering(182) 00:23:02.969 fused_ordering(183) 00:23:02.969 fused_ordering(184) 00:23:02.969 fused_ordering(185) 00:23:02.969 fused_ordering(186) 00:23:02.969 fused_ordering(187) 00:23:02.969 fused_ordering(188) 00:23:02.969 fused_ordering(189) 00:23:02.969 fused_ordering(190) 00:23:02.969 fused_ordering(191) 00:23:02.969 fused_ordering(192) 00:23:02.969 fused_ordering(193) 00:23:02.969 fused_ordering(194) 00:23:02.969 fused_ordering(195) 00:23:02.969 fused_ordering(196) 00:23:02.969 fused_ordering(197) 00:23:02.969 fused_ordering(198) 00:23:02.969 fused_ordering(199) 00:23:02.969 fused_ordering(200) 00:23:02.969 fused_ordering(201) 00:23:02.969 fused_ordering(202) 00:23:02.969 fused_ordering(203) 00:23:02.969 fused_ordering(204) 00:23:02.969 fused_ordering(205) 00:23:03.228 
fused_ordering(206) 00:23:03.228 fused_ordering(207) 00:23:03.228 fused_ordering(208) 00:23:03.228 fused_ordering(209) 00:23:03.228 fused_ordering(210) 00:23:03.228 fused_ordering(211) 00:23:03.228 fused_ordering(212) 00:23:03.228 fused_ordering(213) 00:23:03.228 fused_ordering(214) 00:23:03.228 fused_ordering(215) 00:23:03.228 fused_ordering(216) 00:23:03.228 fused_ordering(217) 00:23:03.228 fused_ordering(218) 00:23:03.228 fused_ordering(219) 00:23:03.228 fused_ordering(220) 00:23:03.228 fused_ordering(221) 00:23:03.228 fused_ordering(222) 00:23:03.228 fused_ordering(223) 00:23:03.228 fused_ordering(224) 00:23:03.228 fused_ordering(225) 00:23:03.228 fused_ordering(226) 00:23:03.228 fused_ordering(227) 00:23:03.228 fused_ordering(228) 00:23:03.229 fused_ordering(229) 00:23:03.229 fused_ordering(230) 00:23:03.229 fused_ordering(231) 00:23:03.229 fused_ordering(232) 00:23:03.229 fused_ordering(233) 00:23:03.229 fused_ordering(234) 00:23:03.229 fused_ordering(235) 00:23:03.229 fused_ordering(236) 00:23:03.229 fused_ordering(237) 00:23:03.229 fused_ordering(238) 00:23:03.229 fused_ordering(239) 00:23:03.229 fused_ordering(240) 00:23:03.229 fused_ordering(241) 00:23:03.229 fused_ordering(242) 00:23:03.229 fused_ordering(243) 00:23:03.229 fused_ordering(244) 00:23:03.229 fused_ordering(245) 00:23:03.229 fused_ordering(246) 00:23:03.229 fused_ordering(247) 00:23:03.229 fused_ordering(248) 00:23:03.229 fused_ordering(249) 00:23:03.229 fused_ordering(250) 00:23:03.229 fused_ordering(251) 00:23:03.229 fused_ordering(252) 00:23:03.229 fused_ordering(253) 00:23:03.229 fused_ordering(254) 00:23:03.229 fused_ordering(255) 00:23:03.229 fused_ordering(256) 00:23:03.229 fused_ordering(257) 00:23:03.229 fused_ordering(258) 00:23:03.229 fused_ordering(259) 00:23:03.229 fused_ordering(260) 00:23:03.229 fused_ordering(261) 00:23:03.229 fused_ordering(262) 00:23:03.229 fused_ordering(263) 00:23:03.229 fused_ordering(264) 00:23:03.229 fused_ordering(265) 00:23:03.229 fused_ordering(266) 00:23:03.229 fused_ordering(267) 00:23:03.229 fused_ordering(268) 00:23:03.229 fused_ordering(269) 00:23:03.229 fused_ordering(270) 00:23:03.229 fused_ordering(271) 00:23:03.229 fused_ordering(272) 00:23:03.229 fused_ordering(273) 00:23:03.229 fused_ordering(274) 00:23:03.229 fused_ordering(275) 00:23:03.229 fused_ordering(276) 00:23:03.229 fused_ordering(277) 00:23:03.229 fused_ordering(278) 00:23:03.229 fused_ordering(279) 00:23:03.229 fused_ordering(280) 00:23:03.229 fused_ordering(281) 00:23:03.229 fused_ordering(282) 00:23:03.229 fused_ordering(283) 00:23:03.229 fused_ordering(284) 00:23:03.229 fused_ordering(285) 00:23:03.229 fused_ordering(286) 00:23:03.229 fused_ordering(287) 00:23:03.229 fused_ordering(288) 00:23:03.229 fused_ordering(289) 00:23:03.229 fused_ordering(290) 00:23:03.229 fused_ordering(291) 00:23:03.229 fused_ordering(292) 00:23:03.229 fused_ordering(293) 00:23:03.229 fused_ordering(294) 00:23:03.229 fused_ordering(295) 00:23:03.229 fused_ordering(296) 00:23:03.229 fused_ordering(297) 00:23:03.229 fused_ordering(298) 00:23:03.229 fused_ordering(299) 00:23:03.229 fused_ordering(300) 00:23:03.229 fused_ordering(301) 00:23:03.229 fused_ordering(302) 00:23:03.229 fused_ordering(303) 00:23:03.229 fused_ordering(304) 00:23:03.229 fused_ordering(305) 00:23:03.229 fused_ordering(306) 00:23:03.229 fused_ordering(307) 00:23:03.229 fused_ordering(308) 00:23:03.229 fused_ordering(309) 00:23:03.229 fused_ordering(310) 00:23:03.229 fused_ordering(311) 00:23:03.229 fused_ordering(312) 00:23:03.229 fused_ordering(313) 
00:23:03.229 fused_ordering(314) 00:23:03.229 fused_ordering(315) 00:23:03.229 fused_ordering(316) 00:23:03.229 fused_ordering(317) 00:23:03.229 fused_ordering(318) 00:23:03.229 fused_ordering(319) 00:23:03.229 fused_ordering(320) 00:23:03.229 fused_ordering(321) 00:23:03.229 fused_ordering(322) 00:23:03.229 fused_ordering(323) 00:23:03.229 fused_ordering(324) 00:23:03.229 fused_ordering(325) 00:23:03.229 fused_ordering(326) 00:23:03.229 fused_ordering(327) 00:23:03.229 fused_ordering(328) 00:23:03.229 fused_ordering(329) 00:23:03.229 fused_ordering(330) 00:23:03.229 fused_ordering(331) 00:23:03.229 fused_ordering(332) 00:23:03.229 fused_ordering(333) 00:23:03.229 fused_ordering(334) 00:23:03.229 fused_ordering(335) 00:23:03.229 fused_ordering(336) 00:23:03.229 fused_ordering(337) 00:23:03.229 fused_ordering(338) 00:23:03.229 fused_ordering(339) 00:23:03.229 fused_ordering(340) 00:23:03.229 fused_ordering(341) 00:23:03.229 fused_ordering(342) 00:23:03.229 fused_ordering(343) 00:23:03.229 fused_ordering(344) 00:23:03.229 fused_ordering(345) 00:23:03.229 fused_ordering(346) 00:23:03.229 fused_ordering(347) 00:23:03.229 fused_ordering(348) 00:23:03.229 fused_ordering(349) 00:23:03.229 fused_ordering(350) 00:23:03.229 fused_ordering(351) 00:23:03.229 fused_ordering(352) 00:23:03.229 fused_ordering(353) 00:23:03.229 fused_ordering(354) 00:23:03.229 fused_ordering(355) 00:23:03.229 fused_ordering(356) 00:23:03.229 fused_ordering(357) 00:23:03.229 fused_ordering(358) 00:23:03.229 fused_ordering(359) 00:23:03.229 fused_ordering(360) 00:23:03.229 fused_ordering(361) 00:23:03.229 fused_ordering(362) 00:23:03.229 fused_ordering(363) 00:23:03.229 fused_ordering(364) 00:23:03.229 fused_ordering(365) 00:23:03.229 fused_ordering(366) 00:23:03.229 fused_ordering(367) 00:23:03.229 fused_ordering(368) 00:23:03.229 fused_ordering(369) 00:23:03.229 fused_ordering(370) 00:23:03.229 fused_ordering(371) 00:23:03.229 fused_ordering(372) 00:23:03.229 fused_ordering(373) 00:23:03.229 fused_ordering(374) 00:23:03.229 fused_ordering(375) 00:23:03.229 fused_ordering(376) 00:23:03.229 fused_ordering(377) 00:23:03.229 fused_ordering(378) 00:23:03.229 fused_ordering(379) 00:23:03.229 fused_ordering(380) 00:23:03.229 fused_ordering(381) 00:23:03.229 fused_ordering(382) 00:23:03.229 fused_ordering(383) 00:23:03.229 fused_ordering(384) 00:23:03.229 fused_ordering(385) 00:23:03.229 fused_ordering(386) 00:23:03.229 fused_ordering(387) 00:23:03.229 fused_ordering(388) 00:23:03.229 fused_ordering(389) 00:23:03.229 fused_ordering(390) 00:23:03.229 fused_ordering(391) 00:23:03.229 fused_ordering(392) 00:23:03.229 fused_ordering(393) 00:23:03.229 fused_ordering(394) 00:23:03.229 fused_ordering(395) 00:23:03.229 fused_ordering(396) 00:23:03.229 fused_ordering(397) 00:23:03.229 fused_ordering(398) 00:23:03.229 fused_ordering(399) 00:23:03.229 fused_ordering(400) 00:23:03.229 fused_ordering(401) 00:23:03.229 fused_ordering(402) 00:23:03.229 fused_ordering(403) 00:23:03.229 fused_ordering(404) 00:23:03.229 fused_ordering(405) 00:23:03.229 fused_ordering(406) 00:23:03.229 fused_ordering(407) 00:23:03.229 fused_ordering(408) 00:23:03.229 fused_ordering(409) 00:23:03.229 fused_ordering(410) 00:23:03.229 fused_ordering(411) 00:23:03.229 fused_ordering(412) 00:23:03.229 fused_ordering(413) 00:23:03.229 fused_ordering(414) 00:23:03.229 fused_ordering(415) 00:23:03.229 fused_ordering(416) 00:23:03.229 fused_ordering(417) 00:23:03.229 fused_ordering(418) 00:23:03.229 fused_ordering(419) 00:23:03.229 fused_ordering(420) 00:23:03.229 
fused_ordering(421) 00:23:03.229 fused_ordering(422) 00:23:03.229 fused_ordering(423) 00:23:03.229 fused_ordering(424) 00:23:03.229 fused_ordering(425) 00:23:03.229 fused_ordering(426) 00:23:03.229 fused_ordering(427) 00:23:03.229 fused_ordering(428) 00:23:03.229 fused_ordering(429) 00:23:03.229 fused_ordering(430) 00:23:03.229 fused_ordering(431) 00:23:03.229 fused_ordering(432) 00:23:03.229 fused_ordering(433) 00:23:03.229 fused_ordering(434) 00:23:03.229 fused_ordering(435) 00:23:03.229 fused_ordering(436) 00:23:03.229 fused_ordering(437) 00:23:03.229 fused_ordering(438) 00:23:03.229 fused_ordering(439) 00:23:03.229 fused_ordering(440) 00:23:03.229 fused_ordering(441) 00:23:03.229 fused_ordering(442) 00:23:03.229 fused_ordering(443) 00:23:03.229 fused_ordering(444) 00:23:03.229 fused_ordering(445) 00:23:03.229 fused_ordering(446) 00:23:03.229 fused_ordering(447) 00:23:03.229 fused_ordering(448) 00:23:03.229 fused_ordering(449) 00:23:03.229 fused_ordering(450) 00:23:03.229 fused_ordering(451) 00:23:03.229 fused_ordering(452) 00:23:03.229 fused_ordering(453) 00:23:03.229 fused_ordering(454) 00:23:03.229 fused_ordering(455) 00:23:03.229 fused_ordering(456) 00:23:03.229 fused_ordering(457) 00:23:03.229 fused_ordering(458) 00:23:03.229 fused_ordering(459) 00:23:03.229 fused_ordering(460) 00:23:03.229 fused_ordering(461) 00:23:03.229 fused_ordering(462) 00:23:03.229 fused_ordering(463) 00:23:03.229 fused_ordering(464) 00:23:03.229 fused_ordering(465) 00:23:03.229 fused_ordering(466) 00:23:03.229 fused_ordering(467) 00:23:03.229 fused_ordering(468) 00:23:03.229 fused_ordering(469) 00:23:03.229 fused_ordering(470) 00:23:03.229 fused_ordering(471) 00:23:03.229 fused_ordering(472) 00:23:03.229 fused_ordering(473) 00:23:03.229 fused_ordering(474) 00:23:03.229 fused_ordering(475) 00:23:03.229 fused_ordering(476) 00:23:03.229 fused_ordering(477) 00:23:03.229 fused_ordering(478) 00:23:03.229 fused_ordering(479) 00:23:03.229 fused_ordering(480) 00:23:03.229 fused_ordering(481) 00:23:03.229 fused_ordering(482) 00:23:03.229 fused_ordering(483) 00:23:03.229 fused_ordering(484) 00:23:03.229 fused_ordering(485) 00:23:03.229 fused_ordering(486) 00:23:03.229 fused_ordering(487) 00:23:03.229 fused_ordering(488) 00:23:03.229 fused_ordering(489) 00:23:03.229 fused_ordering(490) 00:23:03.229 fused_ordering(491) 00:23:03.229 fused_ordering(492) 00:23:03.229 fused_ordering(493) 00:23:03.229 fused_ordering(494) 00:23:03.230 fused_ordering(495) 00:23:03.230 fused_ordering(496) 00:23:03.230 fused_ordering(497) 00:23:03.230 fused_ordering(498) 00:23:03.230 fused_ordering(499) 00:23:03.230 fused_ordering(500) 00:23:03.230 fused_ordering(501) 00:23:03.230 fused_ordering(502) 00:23:03.230 fused_ordering(503) 00:23:03.230 fused_ordering(504) 00:23:03.230 fused_ordering(505) 00:23:03.230 fused_ordering(506) 00:23:03.230 fused_ordering(507) 00:23:03.230 fused_ordering(508) 00:23:03.230 fused_ordering(509) 00:23:03.230 fused_ordering(510) 00:23:03.230 fused_ordering(511) 00:23:03.230 fused_ordering(512) 00:23:03.230 fused_ordering(513) 00:23:03.230 fused_ordering(514) 00:23:03.230 fused_ordering(515) 00:23:03.230 fused_ordering(516) 00:23:03.230 fused_ordering(517) 00:23:03.230 fused_ordering(518) 00:23:03.230 fused_ordering(519) 00:23:03.230 fused_ordering(520) 00:23:03.230 fused_ordering(521) 00:23:03.230 fused_ordering(522) 00:23:03.230 fused_ordering(523) 00:23:03.230 fused_ordering(524) 00:23:03.230 fused_ordering(525) 00:23:03.230 fused_ordering(526) 00:23:03.230 fused_ordering(527) 00:23:03.230 fused_ordering(528) 
00:23:03.230 fused_ordering(529) 00:23:03.230 fused_ordering(530) 00:23:03.230 fused_ordering(531) 00:23:03.230 fused_ordering(532) 00:23:03.230 fused_ordering(533) 00:23:03.230 fused_ordering(534) 00:23:03.230 fused_ordering(535) 00:23:03.230 fused_ordering(536) 00:23:03.230 fused_ordering(537) 00:23:03.230 fused_ordering(538) 00:23:03.230 fused_ordering(539) 00:23:03.230 fused_ordering(540) 00:23:03.230 fused_ordering(541) 00:23:03.230 fused_ordering(542) 00:23:03.230 fused_ordering(543) 00:23:03.230 fused_ordering(544) 00:23:03.230 fused_ordering(545) 00:23:03.230 fused_ordering(546) 00:23:03.230 fused_ordering(547) 00:23:03.230 fused_ordering(548) 00:23:03.230 fused_ordering(549) 00:23:03.230 fused_ordering(550) 00:23:03.230 fused_ordering(551) 00:23:03.230 fused_ordering(552) 00:23:03.230 fused_ordering(553) 00:23:03.230 fused_ordering(554) 00:23:03.230 fused_ordering(555) 00:23:03.230 fused_ordering(556) 00:23:03.230 fused_ordering(557) 00:23:03.230 fused_ordering(558) 00:23:03.230 fused_ordering(559) 00:23:03.230 fused_ordering(560) 00:23:03.230 fused_ordering(561) 00:23:03.230 fused_ordering(562) 00:23:03.230 fused_ordering(563) 00:23:03.230 fused_ordering(564) 00:23:03.230 fused_ordering(565) 00:23:03.230 fused_ordering(566) 00:23:03.230 fused_ordering(567) 00:23:03.230 fused_ordering(568) 00:23:03.230 fused_ordering(569) 00:23:03.230 fused_ordering(570) 00:23:03.230 fused_ordering(571) 00:23:03.230 fused_ordering(572) 00:23:03.230 fused_ordering(573) 00:23:03.230 fused_ordering(574) 00:23:03.230 fused_ordering(575) 00:23:03.230 fused_ordering(576) 00:23:03.230 fused_ordering(577) 00:23:03.230 fused_ordering(578) 00:23:03.230 fused_ordering(579) 00:23:03.230 fused_ordering(580) 00:23:03.230 fused_ordering(581) 00:23:03.230 fused_ordering(582) 00:23:03.230 fused_ordering(583) 00:23:03.230 fused_ordering(584) 00:23:03.230 fused_ordering(585) 00:23:03.230 fused_ordering(586) 00:23:03.230 fused_ordering(587) 00:23:03.230 fused_ordering(588) 00:23:03.230 fused_ordering(589) 00:23:03.230 fused_ordering(590) 00:23:03.230 fused_ordering(591) 00:23:03.230 fused_ordering(592) 00:23:03.230 fused_ordering(593) 00:23:03.230 fused_ordering(594) 00:23:03.230 fused_ordering(595) 00:23:03.230 fused_ordering(596) 00:23:03.230 fused_ordering(597) 00:23:03.230 fused_ordering(598) 00:23:03.230 fused_ordering(599) 00:23:03.230 fused_ordering(600) 00:23:03.230 fused_ordering(601) 00:23:03.230 fused_ordering(602) 00:23:03.230 fused_ordering(603) 00:23:03.230 fused_ordering(604) 00:23:03.230 fused_ordering(605) 00:23:03.230 fused_ordering(606) 00:23:03.230 fused_ordering(607) 00:23:03.230 fused_ordering(608) 00:23:03.230 fused_ordering(609) 00:23:03.230 fused_ordering(610) 00:23:03.230 fused_ordering(611) 00:23:03.230 fused_ordering(612) 00:23:03.230 fused_ordering(613) 00:23:03.230 fused_ordering(614) 00:23:03.230 fused_ordering(615) 00:23:03.489 fused_ordering(616) 00:23:03.489 fused_ordering(617) 00:23:03.489 fused_ordering(618) 00:23:03.489 fused_ordering(619) 00:23:03.489 fused_ordering(620) 00:23:03.489 fused_ordering(621) 00:23:03.489 fused_ordering(622) 00:23:03.489 fused_ordering(623) 00:23:03.489 fused_ordering(624) 00:23:03.489 fused_ordering(625) 00:23:03.489 fused_ordering(626) 00:23:03.489 fused_ordering(627) 00:23:03.489 fused_ordering(628) 00:23:03.489 fused_ordering(629) 00:23:03.489 fused_ordering(630) 00:23:03.489 fused_ordering(631) 00:23:03.489 fused_ordering(632) 00:23:03.489 fused_ordering(633) 00:23:03.489 fused_ordering(634) 00:23:03.489 fused_ordering(635) 00:23:03.489 
00:23:03.489 fused_ordering(636) ... fused_ordering(1023) [fused_ordering iterations 636-1023 all completed between 00:23:03.489 and 00:23:03.490; repetitive per-iteration markers collapsed]
00:23:03.490 13:54:03 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:23:03.490 13:54:03 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:23:03.490 13:54:03 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:03.490 13:54:03 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:23:03.490 13:54:03 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:23:03.490 13:54:03 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:23:03.490 13:54:03 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:23:03.490 13:54:03 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:03.490 13:54:03 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:23:03.490 rmmod nvme_rdma 00:23:03.490 rmmod nvme_fabrics 00:23:03.490 13:54:03 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:03.490 13:54:03 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:23:03.490 13:54:03
nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:23:03.490 13:54:03 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 1747375 ']' 00:23:03.490 13:54:03 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 1747375 00:23:03.490 13:54:03 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 1747375 ']' 00:23:03.490 13:54:03 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 1747375 00:23:03.490 13:54:03 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:23:03.490 13:54:03 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:03.490 13:54:03 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1747375 00:23:03.749 13:54:03 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:03.749 13:54:03 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:03.749 13:54:03 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1747375' 00:23:03.749 killing process with pid 1747375 00:23:03.749 13:54:03 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 1747375 00:23:03.749 13:54:03 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 1747375 00:23:03.749 13:54:03 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:03.749 13:54:03 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:23:03.749 00:23:03.749 real 0m7.591s 00:23:03.749 user 0m3.715s 00:23:03.749 sys 0m4.978s 00:23:03.749 13:54:03 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:03.749 13:54:03 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:23:03.749 ************************************ 00:23:03.749 END TEST nvmf_fused_ordering 00:23:03.749 ************************************ 00:23:03.749 13:54:03 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=rdma 00:23:03.749 13:54:03 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:03.749 13:54:03 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:03.749 13:54:03 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:04.009 ************************************ 00:23:04.009 START TEST nvmf_ns_masking 00:23:04.009 ************************************ 00:23:04.009 13:54:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=rdma 00:23:04.009 * Looking for test storage... 
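[annotation] The ns_masking test starting here exercises per-host namespace visibility on subsystem nqn.2016-06.io.spdk:cnode1. The mechanism it verifies reduces to two rpc.py calls, restated from the xtrace further down (the full scripts/rpc.py path is shortened to rpc.py for readability):

rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible   # namespace 1 starts hidden from every host
rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1           # then grant visibility to a single host NQN

Until nvmf_ns_add_host runs, the connected initiator reads the namespace NGUID as all zeros, which is exactly what the ns_is_visible / nguid=00000000000000000000000000000000 checks below assert.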
00:23:04.009 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:23:04.009 13:54:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:04.009 13:54:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lcov --version 00:23:04.009 13:54:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:04.009 13:54:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:04.009 13:54:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:04.009 13:54:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:04.009 13:54:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:04.009 13:54:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:23:04.009 13:54:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:23:04.009 13:54:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:23:04.009 13:54:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:23:04.009 13:54:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:23:04.009 13:54:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:23:04.009 13:54:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:23:04.009 13:54:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:04.009 13:54:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:23:04.009 13:54:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:23:04.009 13:54:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:04.009 13:54:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:04.009 13:54:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:23:04.009 13:54:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:23:04.009 13:54:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:04.009 13:54:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:23:04.009 13:54:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:23:04.009 13:54:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:23:04.009 13:54:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:23:04.009 13:54:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:04.009 13:54:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:23:04.009 13:54:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:23:04.009 13:54:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:04.009 13:54:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:04.009 13:54:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:23:04.009 13:54:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:04.009 13:54:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:04.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:04.009 --rc genhtml_branch_coverage=1 00:23:04.009 --rc genhtml_function_coverage=1 00:23:04.009 --rc genhtml_legend=1 00:23:04.009 --rc geninfo_all_blocks=1 00:23:04.009 --rc geninfo_unexecuted_blocks=1 00:23:04.009 00:23:04.009 ' 00:23:04.009 13:54:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:04.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:04.009 --rc genhtml_branch_coverage=1 00:23:04.009 --rc genhtml_function_coverage=1 00:23:04.009 --rc genhtml_legend=1 00:23:04.009 --rc geninfo_all_blocks=1 00:23:04.009 --rc geninfo_unexecuted_blocks=1 00:23:04.009 00:23:04.009 ' 00:23:04.009 13:54:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:04.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:04.009 --rc genhtml_branch_coverage=1 00:23:04.009 --rc genhtml_function_coverage=1 00:23:04.009 --rc genhtml_legend=1 00:23:04.009 --rc geninfo_all_blocks=1 00:23:04.009 --rc geninfo_unexecuted_blocks=1 00:23:04.009 00:23:04.009 ' 00:23:04.009 13:54:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:04.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:04.009 --rc genhtml_branch_coverage=1 00:23:04.009 --rc genhtml_function_coverage=1 00:23:04.009 --rc genhtml_legend=1 00:23:04.009 --rc geninfo_all_blocks=1 00:23:04.009 --rc geninfo_unexecuted_blocks=1 00:23:04.009 00:23:04.009 ' 00:23:04.009 13:54:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:23:04.009 13:54:03 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:23:04.009 13:54:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:04.009 13:54:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:04.009 13:54:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:04.009 13:54:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:04.009 13:54:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:04.009 13:54:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:04.009 13:54:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:04.009 13:54:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:04.009 13:54:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:04.009 13:54:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:04.009 13:54:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:23:04.009 13:54:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:23:04.009 13:54:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:04.009 13:54:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:04.009 13:54:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:04.009 13:54:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:04.010 13:54:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:23:04.010 13:54:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:23:04.010 13:54:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:04.010 13:54:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:04.010 13:54:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:04.010 13:54:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:04.010 13:54:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:04.010 13:54:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:04.010 13:54:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:23:04.010 13:54:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:04.010 13:54:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:23:04.010 13:54:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:04.010 13:54:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:04.010 13:54:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:04.010 13:54:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:04.010 13:54:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:04.010 13:54:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:04.010 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:04.010 13:54:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:04.010 13:54:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:04.010 13:54:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:04.010 13:54:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:23:04.010 13:54:03 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:23:04.010 13:54:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:23:04.010 13:54:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:23:04.010 13:54:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=196f4295-2218-45ea-85eb-27c9e85e5546 00:23:04.010 13:54:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:23:04.010 13:54:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=d9903aee-d295-487b-8eaa-87ca9642a670 00:23:04.010 13:54:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:23:04.010 13:54:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:23:04.010 13:54:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:23:04.010 13:54:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:23:04.010 13:54:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=3717d7d9-240d-4df7-b067-7193f501653c 00:23:04.010 13:54:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:23:04.010 13:54:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:23:04.010 13:54:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:04.010 13:54:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:04.010 13:54:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:04.010 13:54:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:04.010 13:54:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:04.010 13:54:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:04.010 13:54:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:04.010 13:54:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:04.010 13:54:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:04.010 13:54:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:23:04.010 13:54:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:23:10.577 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:10.577 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:23:10.577 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:10.577 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:10.577 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:10.577 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # 
pci_drivers=() 00:23:10.577 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:10.577 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:23:10.577 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:10.577 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:23:10.577 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:23:10.577 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:23:10.577 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:23:10.577 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:23:10.577 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:23:10.577 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:10.577 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:10.577 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:10.577 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:10.577 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:10.577 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:10.577 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:10.577 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:10.577 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:10.577 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:10.577 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:10.577 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:10.577 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:10.577 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:23:10.577 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:23:10.577 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:23:10.577 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:23:10.577 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:23:10.577 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:10.577 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:10.577 13:54:09 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:23:10.577 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:23:10.577 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:23:10.577 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:23:10.577 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:23:10.577 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:10.577 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:23:10.577 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:23:10.577 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:10.577 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:23:10.577 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:23:10.577 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:23:10.577 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:23:10.577 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:23:10.577 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:10.577 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:23:10.577 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:23:10.577 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:10.577 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:23:10.577 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:10.577 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:10.577 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:23:10.577 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:10.577 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:10.577 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:23:10.577 Found net devices under 0000:18:00.0: mlx_0_0 00:23:10.577 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:10.577 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:10.577 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:10.577 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:23:10.577 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 
0 )) 00:23:10.577 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:10.577 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:23:10.577 Found net devices under 0000:18:00.1: mlx_0_1 00:23:10.577 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:10.577 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:10.577 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:23:10.577 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:10.577 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:23:10.577 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:23:10.577 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@448 -- # rdma_device_init 00:23:10.578 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:23:10.578 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@62 -- # uname 00:23:10.578 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:23:10.578 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@66 -- # modprobe ib_cm 00:23:10.578 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@67 -- # modprobe ib_core 00:23:10.578 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@68 -- # modprobe ib_umad 00:23:10.578 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:23:10.578 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@70 -- # modprobe iw_cm 00:23:10.578 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:23:10.578 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:23:10.578 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@530 -- # allocate_nic_ips 00:23:10.578 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:23:10.578 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@77 -- # get_rdma_if_list 00:23:10.578 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:10.578 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:23:10.578 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:23:10.578 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:10.578 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:23:10.578 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:23:10.578 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:10.578 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ mlx_0_0 == 
\m\l\x\_\0\_\0 ]] 00:23:10.578 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@108 -- # echo mlx_0_0 00:23:10.578 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@109 -- # continue 2 00:23:10.578 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:23:10.578 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:10.578 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:10.578 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:10.578 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:10.578 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@108 -- # echo mlx_0_1 00:23:10.578 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@109 -- # continue 2 00:23:10.578 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:23:10.578 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:23:10.578 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:23:10.578 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:23:10.578 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # awk '{print $4}' 00:23:10.578 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # cut -d/ -f1 00:23:10.578 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:23:10.578 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:23:10.578 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:23:10.578 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:10.578 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:23:10.578 altname enp24s0f0np0 00:23:10.578 altname ens785f0np0 00:23:10.578 inet 192.168.100.8/24 scope global mlx_0_0 00:23:10.578 valid_lft forever preferred_lft forever 00:23:10.578 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:23:10.578 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:23:10.578 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:23:10.578 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:23:10.578 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # awk '{print $4}' 00:23:10.578 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # cut -d/ -f1 00:23:10.578 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:23:10.578 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:23:10.578 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:23:10.578 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:10.578 link/ether 
50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:23:10.578 altname enp24s0f1np1 00:23:10.578 altname ens785f1np1 00:23:10.578 inet 192.168.100.9/24 scope global mlx_0_1 00:23:10.578 valid_lft forever preferred_lft forever 00:23:10.578 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:23:10.578 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:10.578 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:23:10.578 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:23:10.578 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:23:10.578 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@90 -- # get_rdma_if_list 00:23:10.578 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:10.578 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:23:10.578 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:23:10.578 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:10.578 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:23:10.578 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:23:10.578 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:10.578 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:10.578 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@108 -- # echo mlx_0_0 00:23:10.578 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@109 -- # continue 2 00:23:10.578 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:23:10.578 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:10.578 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:10.578 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:10.578 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:10.578 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@108 -- # echo mlx_0_1 00:23:10.578 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@109 -- # continue 2 00:23:10.578 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:23:10.578 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:23:10.578 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:23:10.578 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:23:10.578 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # awk '{print $4}' 
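[annotation] The pipeline traced here is nvmf/common.sh's get_ip_address helper harvesting each RDMA interface's IPv4 address. Condensed into a standalone sketch (same three commands as the trace; only the wrapper is restated):

get_ip_address() {
    # with `ip -o -4` each interface prints on one line; field 4 is ADDR/PREFIX, so strip the prefix length
    ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
}
get_ip_address mlx_0_0   # -> 192.168.100.8 on this testbed
get_ip_address mlx_0_1   # -> 192.168.100.9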
00:23:10.578 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # cut -d/ -f1 00:23:10.578 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:23:10.578 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:23:10.578 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:23:10.578 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:23:10.578 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # awk '{print $4}' 00:23:10.578 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # cut -d/ -f1 00:23:10.578 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:23:10.578 192.168.100.9' 00:23:10.578 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:23:10.578 192.168.100.9' 00:23:10.578 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@485 -- # head -n 1 00:23:10.578 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:23:10.578 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:23:10.578 192.168.100.9' 00:23:10.578 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@486 -- # tail -n +2 00:23:10.578 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@486 -- # head -n 1 00:23:10.578 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:23:10.578 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:23:10.578 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:23:10.578 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:23:10.578 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:23:10.578 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:23:10.578 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:23:10.578 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:10.578 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:10.578 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:23:10.578 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=1751415 00:23:10.578 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 1751415 00:23:10.578 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:10.579 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 1751415 ']' 00:23:10.579 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:10.579 13:54:09 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:10.579 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:10.579 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:10.579 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:10.579 13:54:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:23:10.579 [2024-12-05 13:54:09.946071] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 00:23:10.579 [2024-12-05 13:54:09.946124] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:10.579 [2024-12-05 13:54:10.020879] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:10.579 [2024-12-05 13:54:10.045141] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:10.579 [2024-12-05 13:54:10.045178] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:10.579 [2024-12-05 13:54:10.045185] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:10.579 [2024-12-05 13:54:10.045190] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:10.579 [2024-12-05 13:54:10.045195] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:10.579 [2024-12-05 13:54:10.045648] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:10.579 13:54:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:10.579 13:54:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:23:10.579 13:54:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:10.579 13:54:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:10.579 13:54:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:23:10.579 13:54:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:10.579 13:54:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:23:10.579 [2024-12-05 13:54:10.354623] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x13fcd00/0x14011f0) succeed. 00:23:10.579 [2024-12-05 13:54:10.364122] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x13fe1b0/0x1442890) succeed. 
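[annotation] With the RDMA transport created and both mlx5 IB devices registered, the test provisions the target and connects an initiator. Distilled from the xtrace that follows (rpc.py stands for the full scripts/rpc.py path; sizes, NQNs, and the host UUID are the ones this run uses):

rpc.py bdev_malloc_create 64 512 -b Malloc1       # 64 MiB RAM-backed bdev, 512-byte blocks
rpc.py bdev_malloc_create 64 512 -b Malloc2
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME   # -a: allow any host; -s: serial
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
    -I 3717d7d9-240d-4df7-b067-7193f501653c -a 192.168.100.8 -s 4420 -i 4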
00:23:10.579 13:54:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:23:10.579 13:54:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:23:10.579 13:54:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:23:10.837 Malloc1 00:23:10.837 13:54:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:23:11.099 Malloc2 00:23:11.099 13:54:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:23:11.394 13:54:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:23:11.394 13:54:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:23:11.671 [2024-12-05 13:54:11.336240] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:23:11.671 13:54:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:23:11.671 13:54:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 3717d7d9-240d-4df7-b067-7193f501653c -a 192.168.100.8 -s 4420 -i 4 00:23:11.929 13:54:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:23:11.929 13:54:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:23:11.929 13:54:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:23:11.929 13:54:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:23:11.929 13:54:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:23:13.831 13:54:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:23:13.831 13:54:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:23:13.831 13:54:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:23:14.091 13:54:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:23:14.091 13:54:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:23:14.091 13:54:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:23:14.091 13:54:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:23:14.091 13:54:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme 
list-subsys -o json 00:23:14.091 13:54:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:23:14.091 13:54:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:23:14.091 13:54:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:23:14.091 13:54:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:23:14.091 13:54:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:23:14.091 [ 0]:0x1 00:23:14.091 13:54:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:23:14.091 13:54:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:23:14.091 13:54:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ad1c1c621c0e45fda0fb7ecae6172da6 00:23:14.091 13:54:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ad1c1c621c0e45fda0fb7ecae6172da6 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:23:14.091 13:54:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:23:14.350 13:54:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:23:14.350 13:54:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:23:14.351 13:54:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:23:14.351 [ 0]:0x1 00:23:14.351 13:54:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:23:14.351 13:54:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:23:14.351 13:54:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ad1c1c621c0e45fda0fb7ecae6172da6 00:23:14.351 13:54:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ad1c1c621c0e45fda0fb7ecae6172da6 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:23:14.351 13:54:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:23:14.351 13:54:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:23:14.351 13:54:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:23:14.351 [ 1]:0x2 00:23:14.351 13:54:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:23:14.351 13:54:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:23:14.351 13:54:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=25da3438dbd14dab9cdff540f6de946b 00:23:14.351 13:54:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 25da3438dbd14dab9cdff540f6de946b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:23:14.351 13:54:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:23:14.351 13:54:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect 
-n nqn.2016-06.io.spdk:cnode1 00:23:14.609 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:23:14.609 13:54:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:14.866 13:54:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:23:15.124 13:54:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:23:15.124 13:54:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 3717d7d9-240d-4df7-b067-7193f501653c -a 192.168.100.8 -s 4420 -i 4 00:23:15.381 13:54:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:23:15.381 13:54:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:23:15.381 13:54:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:23:15.381 13:54:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:23:15.381 13:54:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:23:15.381 13:54:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:23:17.289 13:54:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:23:17.289 13:54:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:23:17.289 13:54:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:23:17.289 13:54:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:23:17.289 13:54:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:23:17.289 13:54:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:23:17.289 13:54:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:23:17.289 13:54:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:23:17.289 13:54:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:23:17.289 13:54:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:23:17.289 13:54:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:23:17.289 13:54:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:23:17.289 13:54:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:23:17.289 13:54:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:23:17.289 13:54:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:17.289 13:54:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:23:17.289 13:54:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:17.289 13:54:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:23:17.289 13:54:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:23:17.289 13:54:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:23:17.546 13:54:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:23:17.546 13:54:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:23:17.546 13:54:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:23:17.546 13:54:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:23:17.546 13:54:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:23:17.546 13:54:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:17.546 13:54:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:17.546 13:54:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:17.546 13:54:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:23:17.546 13:54:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:23:17.546 13:54:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:23:17.546 [ 0]:0x2 00:23:17.546 13:54:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:23:17.546 13:54:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:23:17.546 13:54:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=25da3438dbd14dab9cdff540f6de946b 00:23:17.546 13:54:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 25da3438dbd14dab9cdff540f6de946b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:23:17.546 13:54:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:23:17.804 13:54:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:23:17.804 13:54:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:23:17.804 13:54:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:23:17.804 [ 0]:0x1 00:23:17.804 13:54:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:23:17.804 13:54:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:23:17.804 13:54:17 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ad1c1c621c0e45fda0fb7ecae6172da6 00:23:17.804 13:54:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ad1c1c621c0e45fda0fb7ecae6172da6 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:23:17.804 13:54:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:23:17.804 13:54:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:23:17.804 13:54:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:23:17.804 [ 1]:0x2 00:23:17.804 13:54:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:23:17.804 13:54:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:23:17.804 13:54:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=25da3438dbd14dab9cdff540f6de946b 00:23:17.804 13:54:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 25da3438dbd14dab9cdff540f6de946b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:23:17.804 13:54:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:23:18.061 13:54:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:23:18.061 13:54:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:23:18.061 13:54:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:23:18.061 13:54:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:23:18.061 13:54:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:18.061 13:54:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:23:18.061 13:54:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:18.061 13:54:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:23:18.061 13:54:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:23:18.061 13:54:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:23:18.061 13:54:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:23:18.061 13:54:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:23:18.061 13:54:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:23:18.061 13:54:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:23:18.061 13:54:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:23:18.061 13:54:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( 
es > 128 )) 00:23:18.061 13:54:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:18.061 13:54:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:18.061 13:54:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:23:18.061 13:54:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:23:18.061 13:54:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:23:18.061 [ 0]:0x2 00:23:18.061 13:54:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:23:18.061 13:54:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:23:18.061 13:54:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=25da3438dbd14dab9cdff540f6de946b 00:23:18.061 13:54:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 25da3438dbd14dab9cdff540f6de946b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:23:18.061 13:54:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:23:18.061 13:54:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:23:18.318 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:23:18.318 13:54:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:23:18.594 13:54:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:23:18.594 13:54:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 3717d7d9-240d-4df7-b067-7193f501653c -a 192.168.100.8 -s 4420 -i 4 00:23:18.852 13:54:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:23:18.852 13:54:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:23:18.852 13:54:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:23:18.852 13:54:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:23:18.852 13:54:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:23:18.852 13:54:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:23:21.385 13:54:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:23:21.385 13:54:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:23:21.385 13:54:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:23:21.385 13:54:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:23:21.385 13:54:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:23:21.385 13:54:20 
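The frames above repeatedly expand two harness helpers. A minimal, hedged reconstruction of both follows; the real bodies live in target/ns_masking.sh and common/autotest_common.sh and carry extra argument validation not shown here:

    # waitforserial <serial> [count] - poll until <count> block devices carrying
    # the given serial show up; the sleep 2 and (( i++ <= 15 )) retry cap are
    # taken from the trace above (sketch simplified from autotest_common.sh)
    waitforserial() {
        local serial=$1 count=${2:-1} i=0
        while (( i++ <= 15 )); do
            sleep 2
            (( $(lsblk -l -o NAME,SERIAL | grep -c "$serial") == count )) && return 0
        done
        return 1
    }

    # ns_is_visible <nsid> - a namespace the controller cannot see reports an
    # all-zero NGUID, so compare against 32 zeros; $ctrl_id (e.g. nvme0) is set
    # by the connect step via nvme list-subsys
    ns_is_visible() {
        nvme list-ns "/dev/$ctrl_id" | grep "$1"
        nguid=$(nvme id-ns "/dev/$ctrl_id" -n "$1" -o json | jq -r .nguid)
        [[ $nguid != "00000000000000000000000000000000" ]]
    }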
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:23:21.385 13:54:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:23:21.385 13:54:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:23:21.385 13:54:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:23:21.385 13:54:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:23:21.385 13:54:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:23:21.385 13:54:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:23:21.385 13:54:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:23:21.385 [ 0]:0x1 00:23:21.385 13:54:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:23:21.385 13:54:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:23:21.385 13:54:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ad1c1c621c0e45fda0fb7ecae6172da6 00:23:21.385 13:54:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ad1c1c621c0e45fda0fb7ecae6172da6 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:23:21.385 13:54:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:23:21.385 13:54:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:23:21.385 13:54:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:23:21.385 [ 1]:0x2 00:23:21.385 13:54:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:23:21.385 13:54:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:23:21.385 13:54:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=25da3438dbd14dab9cdff540f6de946b 00:23:21.385 13:54:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 25da3438dbd14dab9cdff540f6de946b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:23:21.385 13:54:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:23:21.385 13:54:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:23:21.385 13:54:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:23:21.385 13:54:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:23:21.385 13:54:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:23:21.385 13:54:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:21.385 13:54:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:23:21.385 13:54:20 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:21.385 13:54:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:23:21.385 13:54:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:23:21.385 13:54:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:23:21.385 13:54:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:23:21.385 13:54:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:23:21.385 13:54:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:23:21.385 13:54:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:23:21.385 13:54:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:23:21.385 13:54:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:21.385 13:54:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:21.385 13:54:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:21.385 13:54:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:23:21.385 13:54:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:23:21.385 13:54:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:23:21.385 [ 0]:0x2 00:23:21.385 13:54:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:23:21.385 13:54:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:23:21.385 13:54:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=25da3438dbd14dab9cdff540f6de946b 00:23:21.385 13:54:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 25da3438dbd14dab9cdff540f6de946b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:23:21.385 13:54:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:23:21.385 13:54:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:23:21.385 13:54:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:23:21.385 13:54:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:23:21.385 13:54:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:21.385 13:54:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:23:21.385 13:54:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:21.385 13:54:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:23:21.385 13:54:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:21.385 13:54:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:23:21.385 13:54:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:23:21.385 13:54:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:23:21.644 [2024-12-05 13:54:21.260644] nvmf_rpc.c:1873:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:23:21.644 request: 00:23:21.644 { 00:23:21.644 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:21.644 "nsid": 2, 00:23:21.644 "host": "nqn.2016-06.io.spdk:host1", 00:23:21.644 "method": "nvmf_ns_remove_host", 00:23:21.644 "req_id": 1 00:23:21.644 } 00:23:21.644 Got JSON-RPC error response 00:23:21.644 response: 00:23:21.644 { 00:23:21.644 "code": -32602, 00:23:21.644 "message": "Invalid parameters" 00:23:21.644 } 00:23:21.644 13:54:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:23:21.644 13:54:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:21.645 13:54:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:21.645 13:54:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:21.645 13:54:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:23:21.645 13:54:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:23:21.645 13:54:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:23:21.645 13:54:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:23:21.645 13:54:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:21.645 13:54:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:23:21.645 13:54:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:21.645 13:54:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:23:21.645 13:54:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:23:21.645 13:54:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:23:21.645 13:54:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:23:21.645 13:54:21 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:23:21.645 13:54:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:23:21.645 13:54:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:23:21.645 13:54:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:23:21.645 13:54:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:21.645 13:54:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:21.645 13:54:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:21.645 13:54:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:23:21.645 13:54:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:23:21.645 13:54:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:23:21.645 [ 0]:0x2 00:23:21.645 13:54:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:23:21.645 13:54:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:23:21.645 13:54:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=25da3438dbd14dab9cdff540f6de946b 00:23:21.645 13:54:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 25da3438dbd14dab9cdff540f6de946b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:23:21.645 13:54:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:23:21.645 13:54:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:23:21.903 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:23:21.903 13:54:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=1753682 00:23:21.903 13:54:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:23:21.903 13:54:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:23:21.903 13:54:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 1753682 /var/tmp/host.sock 00:23:21.903 13:54:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 1753682 ']' 00:23:21.903 13:54:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:23:21.903 13:54:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:21.904 13:54:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:23:21.904 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
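From here the test drives a second SPDK app (started above with -r /var/tmp/host.sock -m 2) as the NVMe-oF host side, and re-creates the namespaces with explicit NGUIDs derived from UUIDs. Hedged sketches of the two helpers involved; the upcasing step in uuid2nguid is an assumption, since the trace only shows the tr call from nvmf/common.sh:

    # uuid2nguid: a UUID minus dashes, upcased, is a valid 32-hex-digit NGUID
    uuid2nguid() {
        tr -d - <<< "${1^^}"
    }
    uuid2nguid 196f4295-2218-45ea-85eb-27c9e85e5546
    # -> 196F4295221845EA85EB27C9E85E5546, passed to nvmf_subsystem_add_ns -g

    # hostrpc: same rpc.py, but pointed at the host app's private RPC socket
    hostrpc() {
        /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py \
            -s /var/tmp/host.sock "$@"
    }
    # attach the host-side NVMe driver to the masked subsystem over RDMA
    hostrpc bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0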
00:23:21.904 13:54:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:21.904 13:54:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:23:21.904 [2024-12-05 13:54:21.740522] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 00:23:21.904 [2024-12-05 13:54:21.740582] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1753682 ] 00:23:22.162 [2024-12-05 13:54:21.813251] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:22.162 [2024-12-05 13:54:21.834873] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:22.420 13:54:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:22.420 13:54:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:23:22.420 13:54:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:22.420 13:54:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:23:22.678 13:54:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 196f4295-2218-45ea-85eb-27c9e85e5546 00:23:22.678 13:54:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:23:22.678 13:54:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 196F4295221845EA85EB27C9E85E5546 -i 00:23:22.936 13:54:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid d9903aee-d295-487b-8eaa-87ca9642a670 00:23:22.936 13:54:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:23:22.936 13:54:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g D9903AEED295487B8EAA87CA9642A670 -i 00:23:22.936 13:54:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:23:23.194 13:54:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:23:23.452 13:54:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:23:23.452 13:54:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b 
nvme0 00:23:23.711 nvme0n1 00:23:23.711 13:54:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:23:23.711 13:54:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:23:23.969 nvme1n2 00:23:23.969 13:54:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:23:23.969 13:54:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:23:23.969 13:54:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:23:23.969 13:54:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:23:23.969 13:54:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:23:24.228 13:54:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:23:24.228 13:54:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:23:24.228 13:54:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:23:24.228 13:54:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:23:24.228 13:54:24 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 196f4295-2218-45ea-85eb-27c9e85e5546 == \1\9\6\f\4\2\9\5\-\2\2\1\8\-\4\5\e\a\-\8\5\e\b\-\2\7\c\9\e\8\5\e\5\5\4\6 ]] 00:23:24.228 13:54:24 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:23:24.228 13:54:24 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:23:24.228 13:54:24 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:23:24.486 13:54:24 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ d9903aee-d295-487b-8eaa-87ca9642a670 == \d\9\9\0\3\a\e\e\-\d\2\9\5\-\4\8\7\b\-\8\e\a\a\-\8\7\c\a\9\6\4\2\a\6\7\0 ]] 00:23:24.486 13:54:24 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:24.744 13:54:24 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:23:24.744 13:54:24 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 196f4295-2218-45ea-85eb-27c9e85e5546 00:23:24.744 13:54:24 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:23:24.744 13:54:24 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 196F4295221845EA85EB27C9E85E5546 00:23:24.744 13:54:24 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:23:24.744 13:54:24 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 196F4295221845EA85EB27C9E85E5546 00:23:24.744 13:54:24 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:23:24.744 13:54:24 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:24.744 13:54:24 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:23:24.744 13:54:24 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:24.744 13:54:24 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:23:24.744 13:54:24 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:24.744 13:54:24 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:23:24.744 13:54:24 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:23:24.744 13:54:24 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 196F4295221845EA85EB27C9E85E5546 00:23:25.001 [2024-12-05 13:54:24.745640] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:23:25.001 [2024-12-05 13:54:24.745669] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:23:25.001 [2024-12-05 13:54:24.745677] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:25.001 request: 00:23:25.001 { 00:23:25.001 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:25.001 "namespace": { 00:23:25.001 "bdev_name": "invalid", 00:23:25.001 "nsid": 1, 00:23:25.001 "nguid": "196F4295221845EA85EB27C9E85E5546", 00:23:25.001 "no_auto_visible": false, 00:23:25.001 "hide_metadata": false 00:23:25.001 }, 00:23:25.001 "method": "nvmf_subsystem_add_ns", 00:23:25.001 "req_id": 1 00:23:25.001 } 00:23:25.001 Got JSON-RPC error response 00:23:25.002 response: 00:23:25.002 { 00:23:25.002 "code": -32602, 00:23:25.002 "message": "Invalid parameters" 00:23:25.002 } 00:23:25.002 13:54:24 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:23:25.002 13:54:24 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:25.002 13:54:24 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:25.002 13:54:24 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:25.002 
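The failure above is deliberate: adding a namespace whose bdev name does not resolve must be rejected (error=-19, JSON-RPC -32602), and the harness asserts that with its NOT wrapper. A simplified, hedged sketch of the semantics; the real autotest_common.sh version also special-cases high exit codes via the (( es > 128 )) check visible in the trace:

    # NOT <cmd...> succeeds only if the wrapped command fails
    NOT() {
        local es=0
        "$@" || es=$?
        (( es != 0 ))
    }
    # e.g. the negative test that just ran:
    NOT rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid \
        -n 1 -g 196F4295221845EA85EB27C9E85E5546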
13:54:24 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 196f4295-2218-45ea-85eb-27c9e85e5546 00:23:25.002 13:54:24 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:23:25.002 13:54:24 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 196F4295221845EA85EB27C9E85E5546 -i 00:23:25.259 13:54:24 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:23:27.156 13:54:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:23:27.156 13:54:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:23:27.156 13:54:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:23:27.415 13:54:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:23:27.415 13:54:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 1753682 00:23:27.415 13:54:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 1753682 ']' 00:23:27.415 13:54:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 1753682 00:23:27.415 13:54:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:23:27.415 13:54:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:27.415 13:54:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1753682 00:23:27.415 13:54:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:27.415 13:54:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:27.415 13:54:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1753682' 00:23:27.415 killing process with pid 1753682 00:23:27.415 13:54:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 1753682 00:23:27.415 13:54:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 1753682 00:23:27.673 13:54:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:27.932 13:54:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:23:27.932 13:54:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:23:27.932 13:54:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:27.932 13:54:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:23:27.932 13:54:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:23:27.932 13:54:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:23:27.932 13:54:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:23:27.932 
13:54:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:27.932 13:54:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:23:27.932 rmmod nvme_rdma 00:23:27.932 rmmod nvme_fabrics 00:23:27.932 13:54:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:27.932 13:54:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:23:27.932 13:54:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:23:27.932 13:54:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 1751415 ']' 00:23:27.932 13:54:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 1751415 00:23:27.932 13:54:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 1751415 ']' 00:23:27.932 13:54:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 1751415 00:23:27.932 13:54:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:23:27.932 13:54:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:27.932 13:54:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1751415 00:23:28.191 13:54:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:28.191 13:54:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:28.191 13:54:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1751415' 00:23:28.191 killing process with pid 1751415 00:23:28.191 13:54:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 1751415 00:23:28.191 13:54:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 1751415 00:23:28.191 13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:28.191 13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:23:28.191 00:23:28.191 real 0m24.400s 00:23:28.191 user 0m31.152s 00:23:28.191 sys 0m6.610s 00:23:28.191 13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:28.191 13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:23:28.191 ************************************ 00:23:28.191 END TEST nvmf_ns_masking 00:23:28.191 ************************************ 00:23:28.451 13:54:28 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:23:28.451 13:54:28 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=rdma 00:23:28.451 13:54:28 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:28.451 13:54:28 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:28.451 13:54:28 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:28.451 ************************************ 00:23:28.451 START TEST nvmf_nvme_cli 00:23:28.451 ************************************ 
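The teardown that closed ns_masking above, condensed from the trace with the nvmftestfini internals simplified:

    rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # drop the target subsystem
    sync
    set +e                        # module unload is allowed to need retries
    modprobe -v -r nvme-rdma      # -v prints the rmmod nvme_rdma / nvme_fabrics lines seen above
    modprobe -v -r nvme-fabrics
    set -e
    killprocess 1751415           # the nvmf target launched by nvmftestinit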
00:23:28.451 13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=rdma 00:23:28.451 * Looking for test storage... 00:23:28.451 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:23:28.451 13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:28.451 13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lcov --version 00:23:28.451 13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:28.451 13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:28.451 13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:28.451 13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:28.451 13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:28.451 13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:23:28.451 13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:23:28.451 13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:23:28.451 13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:23:28.451 13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:23:28.451 13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:23:28.451 13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:23:28.451 13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:28.451 13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:23:28.451 13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:23:28.451 13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:28.451 13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:28.451 13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:23:28.451 13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:23:28.451 13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:28.451 13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:23:28.451 13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:23:28.451 13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:23:28.451 13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:23:28.451 13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:28.451 13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:23:28.451 13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:23:28.451 13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:28.451 13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:28.451 13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:23:28.451 13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:28.451 13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:28.451 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:28.451 --rc genhtml_branch_coverage=1 00:23:28.451 --rc genhtml_function_coverage=1 00:23:28.451 --rc genhtml_legend=1 00:23:28.451 --rc geninfo_all_blocks=1 00:23:28.451 --rc geninfo_unexecuted_blocks=1 00:23:28.451 00:23:28.451 ' 00:23:28.451 13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:28.451 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:28.451 --rc genhtml_branch_coverage=1 00:23:28.451 --rc genhtml_function_coverage=1 00:23:28.451 --rc genhtml_legend=1 00:23:28.451 --rc geninfo_all_blocks=1 00:23:28.451 --rc geninfo_unexecuted_blocks=1 00:23:28.451 00:23:28.451 ' 00:23:28.451 13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:28.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:28.452 --rc genhtml_branch_coverage=1 00:23:28.452 --rc genhtml_function_coverage=1 00:23:28.452 --rc genhtml_legend=1 00:23:28.452 --rc geninfo_all_blocks=1 00:23:28.452 --rc geninfo_unexecuted_blocks=1 00:23:28.452 00:23:28.452 ' 00:23:28.452 13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:28.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:28.452 --rc genhtml_branch_coverage=1 00:23:28.452 --rc genhtml_function_coverage=1 00:23:28.452 --rc genhtml_legend=1 00:23:28.452 --rc geninfo_all_blocks=1 00:23:28.452 --rc geninfo_unexecuted_blocks=1 00:23:28.452 00:23:28.452 ' 00:23:28.452 13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:23:28.452 13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # 
uname -s 00:23:28.452 13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:28.452 13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:28.452 13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:28.452 13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:28.452 13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:28.452 13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:28.452 13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:28.452 13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:28.452 13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:28.452 13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:28.452 13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:23:28.452 13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:23:28.452 13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:28.452 13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:28.452 13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:28.452 13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:28.452 13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:23:28.452 13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:23:28.452 13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:28.452 13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:28.452 13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:28.452 13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:28.452 13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:28.452 13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:28.452 13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:23:28.452 13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:28.452 13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:23:28.452 13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:28.452 13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:28.452 13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:28.452 13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:28.452 13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:28.452 13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:28.452 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:28.452 13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:28.452 13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:28.452 13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:28.452 13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:28.452 13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:28.452 13:54:28 
nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:23:28.452 13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:23:28.711 13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:23:28.711 13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:28.711 13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:28.711 13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:28.711 13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:28.711 13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:28.711 13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:28.711 13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:28.711 13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:28.711 13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:28.711 13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:23:28.711 13:54:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:23:35.280 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:35.280 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:23:35.280 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:35.280 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:35.280 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:35.280 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:35.280 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:35.280 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:23:35.280 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:35.280 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:23:35.280 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:23:35.280 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:23:35.280 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:23:35.280 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:23:35.280 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:23:35.280 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:35.280 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:35.280 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:35.280 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:35.280 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:35.280 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:35.280 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:35.280 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:35.280 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:35.280 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:35.280 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:35.280 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:35.280 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:35.280 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:23:35.280 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:23:35.280 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:23:35.280 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:23:35.280 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:23:35.280 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:35.280 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:35.280 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:23:35.280 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:23:35.280 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:23:35.280 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:23:35.280 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:23:35.280 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:35.280 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:23:35.280 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:23:35.280 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:35.280 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:23:35.280 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:23:35.280 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:23:35.280 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- 
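
gather_supported_nvmf_pci_devs, traced above, sorts known device IDs into per-family arrays (e810, x722, mlx) by looking them up in a pci_bus_cache map keyed "vendor:device", then narrows pci_devs down to the Mellanox list because this rig runs mlx5. A reduced sketch of that pattern, assuming pci_bus_cache was populated earlier the way common.sh does it:

    declare -A pci_bus_cache
    pci_bus_cache["0x15b3:0x1015"]="0000:18:00.0 0000:18:00.1"   # the two ports found on this rig
    mellanox=0x15b3
    mlx=()
    mlx+=(${pci_bus_cache["$mellanox:0x1015"]})   # unquoted on purpose: word-splits into two entries
    pci_devs=("${mlx[@]}")
    echo "${#pci_devs[@]} candidate ports"        # 2, which is why (( 2 == 0 )) above is false
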
nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:23:35.280 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:23:35.280 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:35.280 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:23:35.280 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:23:35.280 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:35.280 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:23:35.280 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:35.280 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:35.280 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:23:35.280 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:35.280 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:35.280 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:23:35.280 Found net devices under 0000:18:00.0: mlx_0_0 00:23:35.280 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:35.280 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:35.280 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:35.280 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:23:35.280 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:35.280 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:35.280 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:23:35.280 Found net devices under 0000:18:00.1: mlx_0_1 00:23:35.280 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:35.280 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:35.280 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:23:35.280 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:35.280 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:23:35.281 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:23:35.281 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@448 -- # rdma_device_init 00:23:35.281 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:23:35.281 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@62 -- # uname 00:23:35.281 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@62 -- # '[' Linux '!=' 
Linux ']' 00:23:35.281 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@66 -- # modprobe ib_cm 00:23:35.281 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@67 -- # modprobe ib_core 00:23:35.281 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@68 -- # modprobe ib_umad 00:23:35.281 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:23:35.281 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@70 -- # modprobe iw_cm 00:23:35.281 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:23:35.281 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:23:35.281 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@530 -- # allocate_nic_ips 00:23:35.281 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:23:35.281 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@77 -- # get_rdma_if_list 00:23:35.281 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:35.281 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:23:35.281 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:23:35.281 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:35.281 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:23:35.281 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:23:35.281 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:35.281 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:35.281 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@108 -- # echo mlx_0_0 00:23:35.281 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@109 -- # continue 2 00:23:35.281 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:23:35.281 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:35.281 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:35.281 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:35.281 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:35.281 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@108 -- # echo mlx_0_1 00:23:35.281 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@109 -- # continue 2 00:23:35.281 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:23:35.281 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:23:35.281 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:23:35.281 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- 
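
rdma_device_init above is, at its core, a fixed module-load sequence followed by IP assignment; the modprobe part restated on its own:

    for m in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
        modprobe "$m"   # CM, core verbs, userspace MAD/verbs, iWARP CM, RDMA CM and its userspace interface
    done

get_rdma_if_list then keeps only the net devices that rxe_cfg also reports, which is how mlx_0_0 and mlx_0_1 are selected above.
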
nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:23:35.281 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # awk '{print $4}' 00:23:35.281 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # cut -d/ -f1 00:23:35.281 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:23:35.281 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:23:35.281 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:23:35.281 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:35.281 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:23:35.281 altname enp24s0f0np0 00:23:35.281 altname ens785f0np0 00:23:35.281 inet 192.168.100.8/24 scope global mlx_0_0 00:23:35.281 valid_lft forever preferred_lft forever 00:23:35.281 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:23:35.281 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:23:35.281 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:23:35.281 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:23:35.281 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # awk '{print $4}' 00:23:35.281 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # cut -d/ -f1 00:23:35.281 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:23:35.281 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:23:35.281 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:23:35.281 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:35.281 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:23:35.281 altname enp24s0f1np1 00:23:35.281 altname ens785f1np1 00:23:35.281 inet 192.168.100.9/24 scope global mlx_0_1 00:23:35.281 valid_lft forever preferred_lft forever 00:23:35.281 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:23:35.281 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:35.281 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:23:35.281 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:23:35.281 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:23:35.281 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@90 -- # get_rdma_if_list 00:23:35.281 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:35.281 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:23:35.281 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:23:35.281 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:35.281 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:23:35.281 13:54:34 
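
get_ip_address, stepped through above, is a three-stage pipeline over one line of ip output; for mlx_0_0 on this run it reduces to:

    ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1
    # -o prints one record per line, field 4 is "192.168.100.8/24", cut -d/ -f1 drops the prefix length
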
nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:23:35.281 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:35.281 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:35.281 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@108 -- # echo mlx_0_0 00:23:35.281 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@109 -- # continue 2 00:23:35.281 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:23:35.281 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:35.281 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:35.281 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:35.281 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:35.281 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@108 -- # echo mlx_0_1 00:23:35.281 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@109 -- # continue 2 00:23:35.281 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:23:35.281 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:23:35.281 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:23:35.281 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:23:35.281 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # awk '{print $4}' 00:23:35.281 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # cut -d/ -f1 00:23:35.281 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:23:35.281 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:23:35.281 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:23:35.281 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:23:35.281 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # awk '{print $4}' 00:23:35.281 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # cut -d/ -f1 00:23:35.281 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:23:35.281 192.168.100.9' 00:23:35.281 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:23:35.281 192.168.100.9' 00:23:35.281 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@485 -- # head -n 1 00:23:35.281 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:23:35.281 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:23:35.281 192.168.100.9' 00:23:35.281 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@486 -- # tail -n +2 00:23:35.281 13:54:34 
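
The head/tail pair above splits the newline-separated address list into the two target variables; written out with the values from this run:

    RDMA_IP_LIST=$(printf '192.168.100.8\n192.168.100.9')
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                 # 192.168.100.8
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)   # 192.168.100.9
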
nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@486 -- # head -n 1 00:23:35.281 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:23:35.281 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:23:35.281 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:23:35.281 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:23:35.281 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:23:35.281 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:23:35.282 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:23:35.282 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:35.282 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:35.282 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:23:35.282 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=1758284 00:23:35.282 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 1758284 00:23:35.282 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:35.282 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 1758284 ']' 00:23:35.282 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:35.282 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:35.282 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:35.282 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:35.282 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:35.282 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:23:35.282 [2024-12-05 13:54:34.394118] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 00:23:35.282 [2024-12-05 13:54:34.394161] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:35.282 [2024-12-05 13:54:34.467061] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:35.282 [2024-12-05 13:54:34.489806] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:35.282 [2024-12-05 13:54:34.489843] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
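
nvmfappstart above boils down to launching the target and blocking until its RPC socket answers; a hedged sketch of that step (the real waitforlisten helper in autotest_common.sh adds a retry limit and pid-liveness checks):

    spdk=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    $spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    while ! $spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5   # keep polling until the app has bound /var/tmp/spdk.sock
    done
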
00:23:35.282 [2024-12-05 13:54:34.489850] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:35.282 [2024-12-05 13:54:34.489856] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:35.282 [2024-12-05 13:54:34.489860] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:35.282 [2024-12-05 13:54:34.491035] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:35.282 [2024-12-05 13:54:34.491146] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:35.282 [2024-12-05 13:54:34.491252] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:35.282 [2024-12-05 13:54:34.491251] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:35.282 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:35.282 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:23:35.282 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:35.282 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:35.282 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:23:35.282 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:35.282 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:23:35.282 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.282 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:23:35.282 [2024-12-05 13:54:34.641249] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x20c1f30/0x20c6420) succeed. 00:23:35.282 [2024-12-05 13:54:34.649418] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x20c35c0/0x2107ac0) succeed. 
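
The two create_ib_device notices above are the target's reaction to the first RPC of the test; issued standalone, that call is:

    /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py \
        nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    # --num-shared-buffers and -u (the I/O unit size) carry the values from the trace above
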
00:23:35.282 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.282 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:35.282 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.282 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:23:35.282 Malloc0 00:23:35.282 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.282 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:35.282 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.282 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:23:35.282 Malloc1 00:23:35.282 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.282 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:23:35.282 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.282 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:23:35.282 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.282 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:35.282 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.282 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:23:35.282 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.282 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:35.282 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.282 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:23:35.282 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.282 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:23:35.282 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.282 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:23:35.282 [2024-12-05 13:54:34.844674] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:23:35.282 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.282 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:23:35.282 13:54:34 
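
The RPC sequence the trace just walked through stands the target up end to end: two 64 MiB malloc bdevs with 512-byte blocks, one subsystem, both namespaces, and an RDMA listener. Collected in one place (rpc_py stands in for the test's rpc_cmd wrapper around scripts/rpc.py):

    rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    $rpc_py bdev_malloc_create 64 512 -b Malloc0
    $rpc_py bdev_malloc_create 64 512 -b Malloc1
    $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME \
        -d SPDK_Controller1 -i 291
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
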
nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.282 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:23:35.282 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.282 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 4420 00:23:35.282 00:23:35.282 Discovery Log Number of Records 2, Generation counter 2 00:23:35.282 =====Discovery Log Entry 0====== 00:23:35.282 trtype: rdma 00:23:35.282 adrfam: ipv4 00:23:35.282 subtype: current discovery subsystem 00:23:35.282 treq: not required 00:23:35.282 portid: 0 00:23:35.282 trsvcid: 4420 00:23:35.282 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:23:35.282 traddr: 192.168.100.8 00:23:35.282 eflags: explicit discovery connections, duplicate discovery information 00:23:35.282 rdma_prtype: not specified 00:23:35.282 rdma_qptype: connected 00:23:35.282 rdma_cms: rdma-cm 00:23:35.282 rdma_pkey: 0x0000 00:23:35.282 =====Discovery Log Entry 1====== 00:23:35.282 trtype: rdma 00:23:35.282 adrfam: ipv4 00:23:35.282 subtype: nvme subsystem 00:23:35.282 treq: not required 00:23:35.282 portid: 0 00:23:35.282 trsvcid: 4420 00:23:35.282 subnqn: nqn.2016-06.io.spdk:cnode1 00:23:35.282 traddr: 192.168.100.8 00:23:35.282 eflags: none 00:23:35.282 rdma_prtype: not specified 00:23:35.282 rdma_qptype: connected 00:23:35.282 rdma_cms: rdma-cm 00:23:35.282 rdma_pkey: 0x0000 00:23:35.282 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:23:35.282 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:23:35.282 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:23:35.282 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:23:35.282 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:23:35.282 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:23:35.282 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:23:35.282 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:23:35.282 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:23:35.282 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:23:35.282 13:54:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:23:36.233 13:54:35 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:23:36.233 13:54:35 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:23:36.233 13:54:35 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:23:36.233 13:54:35 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- 
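
The discovery log above (one discovery-subsystem entry plus the nqn.2016-06.io.spdk:cnode1 entry, both rdma/ipv4 on 192.168.100.8:4420) is the output of the discover call at the top of this block, restated standalone; the hostnqn and hostid were generated earlier in the run by nvme gen-hostnqn:

    nvme discover -t rdma -a 192.168.100.8 -s 4420 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 \
        --hostid=00bafac1-9c9c-e711-906e-0017a4403562
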
common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:23:36.233 13:54:35 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:23:36.233 13:54:35 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:23:38.134 13:54:37 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:23:38.134 13:54:37 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:23:38.134 13:54:37 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:23:38.392 13:54:37 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:23:38.392 13:54:37 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:23:38.392 13:54:37 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:23:38.392 13:54:37 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:23:38.392 13:54:37 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:23:38.392 13:54:37 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:23:38.392 13:54:37 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:23:38.392 13:54:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:23:38.392 13:54:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:23:38.392 13:54:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:23:38.392 13:54:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:23:38.392 13:54:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:23:38.392 13:54:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:23:38.392 13:54:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:23:38.392 13:54:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:23:38.392 13:54:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:23:38.392 13:54:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:23:38.392 13:54:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:23:38.392 /dev/nvme0n2 ]] 00:23:38.392 13:54:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:23:38.392 13:54:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:23:38.392 13:54:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:23:38.392 13:54:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:23:38.392 13:54:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:23:38.392 13:54:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:23:38.392 13:54:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:23:38.392 13:54:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:23:38.392 13:54:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:23:38.392 13:54:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:23:38.392 13:54:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:23:38.392 13:54:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:23:38.392 13:54:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:23:38.392 13:54:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:23:38.392 13:54:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:23:38.392 13:54:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:23:38.392 13:54:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:23:39.328 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:23:39.328 13:54:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:23:39.328 13:54:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:23:39.328 13:54:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:23:39.328 13:54:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:23:39.328 13:54:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:23:39.328 13:54:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:23:39.328 13:54:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # return 0 00:23:39.328 13:54:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:23:39.328 13:54:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:39.328 13:54:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.328 13:54:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:23:39.328 13:54:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.328 13:54:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:23:39.329 13:54:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:23:39.329 13:54:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:39.329 13:54:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:23:39.329 13:54:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:23:39.329 13:54:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:23:39.329 13:54:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:23:39.329 13:54:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:39.329 
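
The connect-and-wait logic that produced /dev/nvme0n1 and /dev/nvme0n2 above, together with the disconnect just issued, reduces to the following pattern (serial, NQN and polling taken from this run; the real waitforserial helpers also cap the number of attempts):

    nvme connect -i 15 -t rdma -a 192.168.100.8 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 \
        --hostid=00bafac1-9c9c-e711-906e-0017a4403562
    until [ "$(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME)" -ge 2 ]; do
        sleep 2   # two namespaces were added, so two block devices are expected
    done
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    while lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do
        sleep 1   # wait until no block device carries the test serial any more
    done
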
13:54:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:23:39.329 rmmod nvme_rdma 00:23:39.329 rmmod nvme_fabrics 00:23:39.329 13:54:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:39.329 13:54:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:23:39.329 13:54:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:23:39.329 13:54:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 1758284 ']' 00:23:39.329 13:54:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 1758284 00:23:39.329 13:54:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 1758284 ']' 00:23:39.329 13:54:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 1758284 00:23:39.329 13:54:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:23:39.329 13:54:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:39.329 13:54:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1758284 00:23:39.587 13:54:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:39.587 13:54:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:39.587 13:54:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1758284' 00:23:39.587 killing process with pid 1758284 00:23:39.587 13:54:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 1758284 00:23:39.587 13:54:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 1758284 00:23:39.845 13:54:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:39.845 13:54:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:23:39.845 00:23:39.845 real 0m11.384s 00:23:39.845 user 0m21.512s 00:23:39.845 sys 0m5.103s 00:23:39.845 13:54:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:39.845 13:54:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:23:39.845 ************************************ 00:23:39.845 END TEST nvmf_nvme_cli 00:23:39.845 ************************************ 00:23:39.845 13:54:39 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 0 -eq 1 ]] 00:23:39.845 13:54:39 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=rdma 00:23:39.845 13:54:39 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:39.845 13:54:39 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:39.845 13:54:39 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:39.845 ************************************ 00:23:39.845 START TEST nvmf_auth_target 00:23:39.845 ************************************ 00:23:39.845 13:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # 
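
killprocess, traced above, refuses to signal blindly: it first confirms the pid is still alive and that its command name is not sudo. A compact restatement with this run's pid (the helper itself also handles the non-Linux and sudo paths seen in the uname and ps checks above):

    pid=1758284
    if kill -0 "$pid" 2>/dev/null; then
        [ "$(ps --no-headers -o comm= "$pid")" = sudo ] || { kill "$pid" && wait "$pid"; }
    fi
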
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=rdma 00:23:39.845 * Looking for test storage... 00:23:39.845 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:23:39.845 13:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:39.845 13:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lcov --version 00:23:39.845 13:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:40.104 13:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:40.104 13:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:40.104 13:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:40.104 13:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:40.104 13:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:23:40.104 13:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:23:40.104 13:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:23:40.104 13:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:23:40.104 13:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:23:40.104 13:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:23:40.104 13:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:23:40.104 13:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:40.104 13:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:23:40.104 13:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:23:40.104 13:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:40.104 13:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:40.104 13:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:23:40.104 13:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:23:40.104 13:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:40.104 13:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:23:40.105 13:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:23:40.105 13:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:23:40.105 13:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:23:40.105 13:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:40.105 13:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:23:40.105 13:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:23:40.105 13:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:40.105 13:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:40.105 13:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:23:40.105 13:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:40.105 13:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:40.105 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:40.105 --rc genhtml_branch_coverage=1 00:23:40.105 --rc genhtml_function_coverage=1 00:23:40.105 --rc genhtml_legend=1 00:23:40.105 --rc geninfo_all_blocks=1 00:23:40.105 --rc geninfo_unexecuted_blocks=1 00:23:40.105 00:23:40.105 ' 00:23:40.105 13:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:40.105 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:40.105 --rc genhtml_branch_coverage=1 00:23:40.105 --rc genhtml_function_coverage=1 00:23:40.105 --rc genhtml_legend=1 00:23:40.105 --rc geninfo_all_blocks=1 00:23:40.105 --rc geninfo_unexecuted_blocks=1 00:23:40.105 00:23:40.105 ' 00:23:40.105 13:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:40.105 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:40.105 --rc genhtml_branch_coverage=1 00:23:40.105 --rc genhtml_function_coverage=1 00:23:40.105 --rc genhtml_legend=1 00:23:40.105 --rc geninfo_all_blocks=1 00:23:40.105 --rc geninfo_unexecuted_blocks=1 00:23:40.105 00:23:40.105 ' 00:23:40.105 13:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:40.105 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:40.105 --rc genhtml_branch_coverage=1 00:23:40.105 --rc genhtml_function_coverage=1 00:23:40.105 --rc genhtml_legend=1 00:23:40.105 --rc geninfo_all_blocks=1 00:23:40.105 --rc geninfo_unexecuted_blocks=1 00:23:40.105 00:23:40.105 ' 00:23:40.105 13:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:23:40.105 13:54:39 
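
The cmp_versions walk above decides that lcov 1.15 predates 2 by splitting both versions into fields and comparing them numerically, left to right, with missing fields treated as zero. A simplified sketch of that idea, splitting on '.' only where the real helper also accepts '-' and ':' separators:

    ver_lt() {
        local -a a b
        local i
        IFS=. read -ra a <<< "$1"
        IFS=. read -ra b <<< "$2"
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # first differing field decides
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1   # equal versions: not less-than
    }
    ver_lt 1.15 2 && echo "lcov is older than 2"   # matches the lt 1.15 2 result above
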
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:23:40.105 13:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:40.105 13:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:40.105 13:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:40.105 13:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:40.105 13:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:40.105 13:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:40.105 13:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:40.105 13:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:40.105 13:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:40.105 13:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:40.105 13:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:23:40.105 13:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:23:40.105 13:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:40.105 13:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:40.105 13:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:40.105 13:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:40.105 13:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:23:40.105 13:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:23:40.105 13:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:40.105 13:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:40.105 13:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:40.105 13:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:40.105 13:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:40.105 13:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:40.105 13:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:23:40.105 13:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:40.105 13:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:23:40.105 13:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:40.105 13:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:40.105 13:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:40.105 13:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:40.105 13:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:40.105 13:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:40.105 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:40.105 13:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:40.105 13:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:40.105 13:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:40.105 13:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:23:40.105 13:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:23:40.105 13:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:23:40.105 13:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:23:40.105 13:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:23:40.105 13:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:23:40.105 13:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:23:40.105 13:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:23:40.105 13:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:23:40.105 13:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:40.105 13:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:40.105 13:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:40.105 13:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:40.105 13:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:40.105 13:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:40.105 13:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:40.105 13:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:40.105 13:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:40.105 13:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:23:40.105 13:54:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:46.673 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:46.673 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:23:46.673 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:46.673 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:46.673 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:46.673 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:46.673 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:46.673 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:23:46.673 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:46.673 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:23:46.673 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:23:46.673 13:54:45 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:23:46.673 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:23:46.673 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:23:46.673 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:23:46.673 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:46.673 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:46.673 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:46.673 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:46.673 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:46.673 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:46.673 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:46.673 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:46.673 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:46.673 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:46.673 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:46.673 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:46.673 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:46.673 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:23:46.673 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:23:46.673 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:23:46.673 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:23:46.673 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:23:46.673 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:46.673 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:46.673 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:23:46.673 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:23:46.673 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:23:46.673 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:23:46.673 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:23:46.673 13:54:45 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:46.673 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:23:46.673 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:23:46.673 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:46.673 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:23:46.673 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:23:46.673 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:23:46.673 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:23:46.673 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:23:46.673 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:46.673 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:23:46.673 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:23:46.673 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:46.673 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:23:46.673 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:46.673 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:46.673 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:23:46.673 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:46.673 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:46.673 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:23:46.673 Found net devices under 0000:18:00.0: mlx_0_0 00:23:46.673 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:46.673 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:46.673 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:46.673 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:23:46.673 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:46.673 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:46.673 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:23:46.673 Found net devices under 0000:18:00.1: mlx_0_1 00:23:46.673 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:46.673 13:54:45 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:46.673 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:23:46.673 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:46.673 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:23:46.673 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:23:46.673 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@448 -- # rdma_device_init 00:23:46.673 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:23:46.673 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@62 -- # uname 00:23:46.673 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:23:46.673 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@66 -- # modprobe ib_cm 00:23:46.673 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@67 -- # modprobe ib_core 00:23:46.674 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@68 -- # modprobe ib_umad 00:23:46.674 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:23:46.674 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@70 -- # modprobe iw_cm 00:23:46.674 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:23:46.674 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:23:46.674 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # allocate_nic_ips 00:23:46.674 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:23:46.674 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@77 -- # get_rdma_if_list 00:23:46.674 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:46.674 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:23:46.674 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:23:46.674 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:46.674 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:23:46.674 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:23:46.674 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:46.674 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:46.674 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@108 -- # echo mlx_0_0 00:23:46.674 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@109 -- # continue 2 00:23:46.674 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:23:46.674 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # for rxe_net_dev 
in "${rxe_net_devs[@]}" 00:23:46.674 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:46.674 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:46.674 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:46.674 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@108 -- # echo mlx_0_1 00:23:46.674 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@109 -- # continue 2 00:23:46.674 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:23:46.674 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:23:46.674 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:23:46.674 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:23:46.674 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:23:46.674 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:23:46.674 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:23:46.674 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:23:46.674 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:23:46.674 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:46.674 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:23:46.674 altname enp24s0f0np0 00:23:46.674 altname ens785f0np0 00:23:46.674 inet 192.168.100.8/24 scope global mlx_0_0 00:23:46.674 valid_lft forever preferred_lft forever 00:23:46.674 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:23:46.674 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:23:46.674 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:23:46.674 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:23:46.674 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:23:46.674 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:23:46.674 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:23:46.674 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:23:46.674 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:23:46.674 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:46.674 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:23:46.674 altname enp24s0f1np1 00:23:46.674 altname ens785f1np1 00:23:46.674 inet 192.168.100.9/24 scope global mlx_0_1 00:23:46.674 valid_lft forever preferred_lft forever 00:23:46.674 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:23:46.674 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 
00:23:46.674 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:23:46.674 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:23:46.674 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:23:46.674 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@90 -- # get_rdma_if_list 00:23:46.674 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:46.674 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:23:46.674 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:23:46.674 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:46.674 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:23:46.674 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:23:46.674 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:46.674 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:46.674 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@108 -- # echo mlx_0_0 00:23:46.674 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@109 -- # continue 2 00:23:46.674 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:23:46.674 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:46.674 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:46.674 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:46.674 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:46.674 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@108 -- # echo mlx_0_1 00:23:46.674 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@109 -- # continue 2 00:23:46.674 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:23:46.674 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:23:46.674 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:23:46.674 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:23:46.674 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:23:46.674 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:23:46.674 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:23:46.674 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:23:46.674 13:54:45 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:23:46.674 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:23:46.674 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:23:46.674 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:23:46.674 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:23:46.674 192.168.100.9' 00:23:46.674 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:23:46.674 192.168.100.9' 00:23:46.674 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@485 -- # head -n 1 00:23:46.674 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:23:46.674 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:23:46.674 192.168.100.9' 00:23:46.674 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@486 -- # tail -n +2 00:23:46.674 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@486 -- # head -n 1 00:23:46.674 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:23:46.674 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:23:46.674 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:23:46.674 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:23:46.674 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:23:46.674 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:23:46.674 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:23:46.674 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:46.674 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:46.674 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:46.674 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=1762494 00:23:46.674 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 1762494 00:23:46.674 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:23:46.674 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1762494 ']' 00:23:46.674 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:46.674 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:46.675 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
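
[Editor's note] The first/second target selection traced earlier in this pass (common.sh@484-486) is plain head/tail slicing over a newline-separated address list; a condensed, runnable restatement of exactly what the trace shows:

    # RDMA_IP_LIST holds one address per line, as gathered above.
    RDMA_IP_LIST=$'192.168.100.8\n192.168.100.9'
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                # 192.168.100.8
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # 192.168.100.9
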
00:23:46.675 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:46.675 13:54:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:46.675 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:46.675 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:23:46.675 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:46.675 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:46.675 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:46.675 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:46.675 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=1762645 00:23:46.675 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:23:46.675 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:23:46.675 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:23:46.675 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:23:46.675 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:46.675 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:23:46.675 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:23:46.675 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:23:46.675 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:23:46.675 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=803d5e01e31d6e79f948d645f98bdbe1457b39e8f0c918d0 00:23:46.675 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:23:46.675 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.dFq 00:23:46.675 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 803d5e01e31d6e79f948d645f98bdbe1457b39e8f0c918d0 0 00:23:46.675 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 803d5e01e31d6e79f948d645f98bdbe1457b39e8f0c918d0 0 00:23:46.675 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:23:46.675 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:23:46.675 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=803d5e01e31d6e79f948d645f98bdbe1457b39e8f0c918d0 00:23:46.675 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:23:46.675 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@733 -- # python - 00:23:46.675 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.dFq 00:23:46.675 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.dFq 00:23:46.675 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.dFq 00:23:46.675 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:23:46.675 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:23:46.675 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:46.675 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:23:46.675 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:23:46.675 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:23:46.675 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:23:46.675 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=3da8c3daeb2f08e5fee76675abb3cf1a7fd440c5910b1bbba5da34fcc5e1a284 00:23:46.675 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:23:46.675 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.SQp 00:23:46.675 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 3da8c3daeb2f08e5fee76675abb3cf1a7fd440c5910b1bbba5da34fcc5e1a284 3 00:23:46.675 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 3da8c3daeb2f08e5fee76675abb3cf1a7fd440c5910b1bbba5da34fcc5e1a284 3 00:23:46.675 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:23:46.675 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:23:46.675 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=3da8c3daeb2f08e5fee76675abb3cf1a7fd440c5910b1bbba5da34fcc5e1a284 00:23:46.675 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:23:46.675 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:23:46.675 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.SQp 00:23:46.675 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.SQp 00:23:46.675 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.SQp 00:23:46.675 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:23:46.675 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:23:46.675 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:46.675 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:23:46.675 13:54:46 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:23:46.675 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:23:46.675 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:46.675 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=97c54fd897d93b0694409593684aff45 00:23:46.675 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:23:46.675 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.LYl 00:23:46.675 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 97c54fd897d93b0694409593684aff45 1 00:23:46.675 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 97c54fd897d93b0694409593684aff45 1 00:23:46.675 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:23:46.675 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:23:46.675 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=97c54fd897d93b0694409593684aff45 00:23:46.675 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:23:46.675 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:23:46.675 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.LYl 00:23:46.675 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.LYl 00:23:46.675 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.LYl 00:23:46.675 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:23:46.675 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:23:46.675 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:46.675 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:23:46.675 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:23:46.675 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:23:46.675 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:23:46.675 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=ac2d73c071c9d29f6ef5cb61e81191a31ebe581811a153f0 00:23:46.675 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:23:46.675 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.GcJ 00:23:46.675 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key ac2d73c071c9d29f6ef5cb61e81191a31ebe581811a153f0 2 00:23:46.675 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 ac2d73c071c9d29f6ef5cb61e81191a31ebe581811a153f0 2 00:23:46.675 13:54:46 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:23:46.675 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:23:46.675 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=ac2d73c071c9d29f6ef5cb61e81191a31ebe581811a153f0 00:23:46.675 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:23:46.675 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:23:46.675 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.GcJ 00:23:46.675 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.GcJ 00:23:46.675 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.GcJ 00:23:46.675 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:23:46.675 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:23:46.675 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:46.675 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:23:46.675 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:23:46.675 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:23:46.675 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:23:46.675 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=c9d86be1c051cda4a0fefebfc1002e31bde101469809904c 00:23:46.675 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:23:46.675 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.dgD 00:23:46.675 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key c9d86be1c051cda4a0fefebfc1002e31bde101469809904c 2 00:23:46.676 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 c9d86be1c051cda4a0fefebfc1002e31bde101469809904c 2 00:23:46.676 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:23:46.676 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:23:46.676 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=c9d86be1c051cda4a0fefebfc1002e31bde101469809904c 00:23:46.676 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:23:46.676 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:23:46.676 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.dgD 00:23:46.676 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.dgD 00:23:46.676 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.dgD 00:23:46.676 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
gen_dhchap_key sha256 32 00:23:46.676 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:23:46.676 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:46.676 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:23:46.676 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:23:46.676 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:23:46.676 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:46.676 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=981ef43cb1cefe06870f9847ba80beda 00:23:46.676 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:23:46.676 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.NtL 00:23:46.676 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 981ef43cb1cefe06870f9847ba80beda 1 00:23:46.676 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 981ef43cb1cefe06870f9847ba80beda 1 00:23:46.676 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:23:46.676 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:23:46.676 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=981ef43cb1cefe06870f9847ba80beda 00:23:46.676 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:23:46.676 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:23:46.676 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.NtL 00:23:46.676 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.NtL 00:23:46.676 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.NtL 00:23:46.676 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:23:46.676 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:23:46.676 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:46.676 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:23:46.676 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:23:46.676 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:23:46.676 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:23:46.676 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=0560d4f970de8fff3b51fe01e973a69c10d085942088e23ca05ebb231a8ece6a 00:23:46.676 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:23:46.676 13:54:46 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.WJO 00:23:46.676 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 0560d4f970de8fff3b51fe01e973a69c10d085942088e23ca05ebb231a8ece6a 3 00:23:46.676 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 0560d4f970de8fff3b51fe01e973a69c10d085942088e23ca05ebb231a8ece6a 3 00:23:46.676 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:23:46.676 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:23:46.676 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=0560d4f970de8fff3b51fe01e973a69c10d085942088e23ca05ebb231a8ece6a 00:23:46.676 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:23:46.676 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:23:46.676 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.WJO 00:23:46.676 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.WJO 00:23:46.676 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.WJO 00:23:46.676 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:23:46.676 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 1762494 00:23:46.676 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1762494 ']' 00:23:46.676 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:46.676 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:46.676 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:46.676 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:46.676 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:46.676 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:46.952 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:46.952 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:23:46.952 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 1762645 /var/tmp/host.sock 00:23:46.952 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1762645 ']' 00:23:46.952 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:23:46.952 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:46.952 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
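
[Editor's note] All of the gen_dhchap_key invocations traced above (keys[0..3] plus the ckeys) follow one pattern. A condensed sketch reconstructed from the trace; the DHHC-1 envelope is produced by an inline "python -" step in the real script (common.sh@733), so the stand-in below is an assumption — the base64 payloads in the nvme_connect line later in this log do decode to the ASCII hex key plus a 4-byte checksum, but the little-endian CRC-32 shown here is a guess:

    # Condensed sketch of gen_dhchap_key, reconstructed from the trace.
    gen_dhchap_key() {
        local digest=$1 len=$2
        # Digest name -> DHHC-1 hash id, as set up at common.sh@752.
        local -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
        local key file
        # len counts hex characters, so pull len/2 bytes from /dev/urandom.
        key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)
        file=$(mktemp -t "spdk.key-$digest.XXX")
        format_dhchap_key "$key" "${digests[$digest]}" > "$file"
        chmod 0600 "$file"
        echo "$file"
    }

    format_dhchap_key() {
        # Hedged stand-in for the inline python at common.sh@733. Assumption:
        # payload = base64(ASCII hex key + CRC-32), tagged with the hash id.
        python3 -c 'import base64,sys,zlib; k=sys.argv[1].encode(); print("DHHC-1:%02d:%s:" % (int(sys.argv[2]), base64.b64encode(k + zlib.crc32(k).to_bytes(4, "little")).decode()))' "$1" "$2"
    }

    gen_dhchap_key null 48   # -> /tmp/spdk.key-null.XXX, mode 0600

The resulting files (/tmp/spdk.key-null.dFq, /tmp/spdk.key-sha512.SQp, ...) are what the keyring_file_add_key RPCs below load on both the target and the host side.
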
00:23:46.952 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:23:46.953 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:46.953 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:47.212 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:47.212 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:23:47.212 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:23:47.212 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:47.212 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:47.212 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:47.212 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:23:47.212 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.dFq 00:23:47.212 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:47.212 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:47.212 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:47.212 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.dFq 00:23:47.212 13:54:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.dFq 00:23:47.471 13:54:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.SQp ]] 00:23:47.471 13:54:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.SQp 00:23:47.471 13:54:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:47.471 13:54:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:47.471 13:54:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:47.471 13:54:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.SQp 00:23:47.471 13:54:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.SQp 00:23:47.471 13:54:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:23:47.471 13:54:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.LYl 00:23:47.471 13:54:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:47.471 13:54:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:47.730 13:54:47 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:47.730 13:54:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.LYl 00:23:47.730 13:54:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.LYl 00:23:47.730 13:54:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.GcJ ]] 00:23:47.730 13:54:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.GcJ 00:23:47.730 13:54:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:47.730 13:54:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:47.730 13:54:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:47.730 13:54:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.GcJ 00:23:47.730 13:54:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.GcJ 00:23:47.988 13:54:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:23:47.988 13:54:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.dgD 00:23:47.988 13:54:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:47.988 13:54:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:47.988 13:54:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:47.988 13:54:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.dgD 00:23:47.988 13:54:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.dgD 00:23:48.247 13:54:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.NtL ]] 00:23:48.247 13:54:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.NtL 00:23:48.247 13:54:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:48.247 13:54:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:48.247 13:54:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:48.247 13:54:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.NtL 00:23:48.247 13:54:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.NtL 00:23:48.247 13:54:48 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:23:48.247 13:54:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.WJO 00:23:48.247 13:54:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:48.247 13:54:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:48.247 13:54:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:48.247 13:54:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.WJO 00:23:48.247 13:54:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.WJO 00:23:48.506 13:54:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:23:48.506 13:54:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:23:48.506 13:54:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:23:48.506 13:54:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:48.506 13:54:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:23:48.506 13:54:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:23:48.764 13:54:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:23:48.764 13:54:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:48.764 13:54:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:23:48.764 13:54:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:23:48.764 13:54:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:23:48.764 13:54:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:48.764 13:54:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:48.764 13:54:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:48.764 13:54:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:48.764 13:54:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:48.764 13:54:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:48.764 13:54:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:48.764 13:54:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:49.079 00:23:49.079 13:54:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:49.079 13:54:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:49.079 13:54:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:49.079 13:54:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:49.079 13:54:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:49.079 13:54:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:49.079 13:54:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:49.079 13:54:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:49.079 13:54:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:49.079 { 00:23:49.079 "cntlid": 1, 00:23:49.079 "qid": 0, 00:23:49.079 "state": "enabled", 00:23:49.079 "thread": "nvmf_tgt_poll_group_000", 00:23:49.079 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:23:49.079 "listen_address": { 00:23:49.079 "trtype": "RDMA", 00:23:49.079 "adrfam": "IPv4", 00:23:49.079 "traddr": "192.168.100.8", 00:23:49.079 "trsvcid": "4420" 00:23:49.079 }, 00:23:49.079 "peer_address": { 00:23:49.079 "trtype": "RDMA", 00:23:49.079 "adrfam": "IPv4", 00:23:49.079 "traddr": "192.168.100.8", 00:23:49.079 "trsvcid": "59003" 00:23:49.079 }, 00:23:49.079 "auth": { 00:23:49.079 "state": "completed", 00:23:49.079 "digest": "sha256", 00:23:49.079 "dhgroup": "null" 00:23:49.079 } 00:23:49.079 } 00:23:49.079 ]' 00:23:49.079 13:54:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:49.079 13:54:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:23:49.079 13:54:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:49.338 13:54:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:23:49.338 13:54:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:49.338 13:54:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:49.338 13:54:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:49.338 13:54:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:23:49.338 13:54:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODAzZDVlMDFlMzFkNmU3OWY5NDhkNjQ1Zjk4YmRiZTE0NTdiMzllOGYwYzkxOGQwj682eA==: --dhchap-ctrl-secret DHHC-1:03:M2RhOGMzZGFlYjJmMDhlNWZlZTc2Njc1YWJiM2NmMWE3ZmQ0NDBjNTkxMGIxYmJiYTVkYTM0ZmNjNWUxYTI4NCRCfwA=: 00:23:49.338 13:54:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ODAzZDVlMDFlMzFkNmU3OWY5NDhkNjQ1Zjk4YmRiZTE0NTdiMzllOGYwYzkxOGQwj682eA==: --dhchap-ctrl-secret DHHC-1:03:M2RhOGMzZGFlYjJmMDhlNWZlZTc2Njc1YWJiM2NmMWE3ZmQ0NDBjNTkxMGIxYmJiYTVkYTM0ZmNjNWUxYTI4NCRCfwA=: 00:23:50.273 13:54:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:50.273 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:50.273 13:54:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:23:50.273 13:54:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:50.273 13:54:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:50.273 13:54:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:50.273 13:54:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:50.273 13:54:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:23:50.273 13:54:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:23:50.531 13:54:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:23:50.531 13:54:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:50.531 13:54:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:23:50.531 13:54:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:23:50.531 13:54:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:23:50.531 13:54:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:50.531 13:54:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:50.531 13:54:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:50.531 13:54:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:50.531 13:54:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:50.531 13:54:50 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:50.531 13:54:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:50.531 13:54:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:50.790 00:23:50.790 13:54:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:50.790 13:54:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:50.790 13:54:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:50.790 13:54:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:50.790 13:54:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:50.790 13:54:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:50.790 13:54:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:50.790 13:54:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:50.790 13:54:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:50.790 { 00:23:50.790 "cntlid": 3, 00:23:50.790 "qid": 0, 00:23:50.790 "state": "enabled", 00:23:50.790 "thread": "nvmf_tgt_poll_group_000", 00:23:50.790 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:23:50.790 "listen_address": { 00:23:50.790 "trtype": "RDMA", 00:23:50.790 "adrfam": "IPv4", 00:23:50.790 "traddr": "192.168.100.8", 00:23:50.790 "trsvcid": "4420" 00:23:50.790 }, 00:23:50.790 "peer_address": { 00:23:50.790 "trtype": "RDMA", 00:23:50.790 "adrfam": "IPv4", 00:23:50.790 "traddr": "192.168.100.8", 00:23:50.790 "trsvcid": "35254" 00:23:50.790 }, 00:23:50.790 "auth": { 00:23:50.790 "state": "completed", 00:23:50.790 "digest": "sha256", 00:23:50.790 "dhgroup": "null" 00:23:50.790 } 00:23:50.790 } 00:23:50.790 ]' 00:23:50.790 13:54:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:50.790 13:54:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:23:50.790 13:54:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:51.049 13:54:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:23:51.049 13:54:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:51.049 13:54:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:51.049 13:54:50 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:51.049 13:54:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:51.307 13:54:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTdjNTRmZDg5N2Q5M2IwNjk0NDA5NTkzNjg0YWZmNDU20hyw: --dhchap-ctrl-secret DHHC-1:02:YWMyZDczYzA3MWM5ZDI5ZjZlZjVjYjYxZTgxMTkxYTMxZWJlNTgxODExYTE1M2YwB17WsA==: 00:23:51.307 13:54:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OTdjNTRmZDg5N2Q5M2IwNjk0NDA5NTkzNjg0YWZmNDU20hyw: --dhchap-ctrl-secret DHHC-1:02:YWMyZDczYzA3MWM5ZDI5ZjZlZjVjYjYxZTgxMTkxYTMxZWJlNTgxODExYTE1M2YwB17WsA==: 00:23:51.875 13:54:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:51.875 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:51.875 13:54:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:23:51.875 13:54:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.875 13:54:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:51.875 13:54:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.875 13:54:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:51.875 13:54:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:23:51.875 13:54:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:23:52.134 13:54:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:23:52.134 13:54:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:52.134 13:54:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:23:52.134 13:54:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:23:52.135 13:54:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:23:52.135 13:54:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:52.135 13:54:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:52.135 13:54:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.135 13:54:51 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:52.135 13:54:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.135 13:54:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:52.135 13:54:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:52.135 13:54:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:52.393 00:23:52.393 13:54:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:52.393 13:54:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:52.393 13:54:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:52.393 13:54:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:52.393 13:54:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:52.393 13:54:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.393 13:54:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:52.652 13:54:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.652 13:54:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:52.652 { 00:23:52.652 "cntlid": 5, 00:23:52.652 "qid": 0, 00:23:52.652 "state": "enabled", 00:23:52.652 "thread": "nvmf_tgt_poll_group_000", 00:23:52.652 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:23:52.652 "listen_address": { 00:23:52.652 "trtype": "RDMA", 00:23:52.652 "adrfam": "IPv4", 00:23:52.652 "traddr": "192.168.100.8", 00:23:52.652 "trsvcid": "4420" 00:23:52.652 }, 00:23:52.652 "peer_address": { 00:23:52.652 "trtype": "RDMA", 00:23:52.652 "adrfam": "IPv4", 00:23:52.652 "traddr": "192.168.100.8", 00:23:52.652 "trsvcid": "52888" 00:23:52.652 }, 00:23:52.652 "auth": { 00:23:52.652 "state": "completed", 00:23:52.652 "digest": "sha256", 00:23:52.652 "dhgroup": "null" 00:23:52.652 } 00:23:52.652 } 00:23:52.652 ]' 00:23:52.652 13:54:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:52.652 13:54:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:23:52.652 13:54:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:52.652 13:54:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:23:52.652 13:54:52 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:52.652 13:54:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:52.652 13:54:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:52.652 13:54:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:52.911 13:54:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzlkODZiZTFjMDUxY2RhNGEwZmVmZWJmYzEwMDJlMzFiZGUxMDE0Njk4MDk5MDRjEe2NXQ==: --dhchap-ctrl-secret DHHC-1:01:OTgxZWY0M2NiMWNlZmUwNjg3MGY5ODQ3YmE4MGJlZGH+9nhS: 00:23:52.911 13:54:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YzlkODZiZTFjMDUxY2RhNGEwZmVmZWJmYzEwMDJlMzFiZGUxMDE0Njk4MDk5MDRjEe2NXQ==: --dhchap-ctrl-secret DHHC-1:01:OTgxZWY0M2NiMWNlZmUwNjg3MGY5ODQ3YmE4MGJlZGH+9nhS: 00:23:53.480 13:54:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:53.480 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:53.480 13:54:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:23:53.480 13:54:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.480 13:54:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:53.480 13:54:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.480 13:54:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:53.480 13:54:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:23:53.480 13:54:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:23:53.739 13:54:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:23:53.739 13:54:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:53.739 13:54:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:23:53.739 13:54:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:23:53.739 13:54:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:23:53.739 13:54:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:53.739 13:54:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key3 00:23:53.739 13:54:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.739 13:54:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:53.739 13:54:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.739 13:54:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:53.739 13:54:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:53.739 13:54:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:53.997 00:23:53.997 13:54:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:53.997 13:54:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:53.997 13:54:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:54.257 13:54:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:54.257 13:54:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:54.257 13:54:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.257 13:54:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:54.257 13:54:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.257 13:54:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:54.257 { 00:23:54.257 "cntlid": 7, 00:23:54.257 "qid": 0, 00:23:54.257 "state": "enabled", 00:23:54.257 "thread": "nvmf_tgt_poll_group_000", 00:23:54.257 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:23:54.257 "listen_address": { 00:23:54.257 "trtype": "RDMA", 00:23:54.257 "adrfam": "IPv4", 00:23:54.257 "traddr": "192.168.100.8", 00:23:54.257 "trsvcid": "4420" 00:23:54.257 }, 00:23:54.257 "peer_address": { 00:23:54.257 "trtype": "RDMA", 00:23:54.257 "adrfam": "IPv4", 00:23:54.257 "traddr": "192.168.100.8", 00:23:54.257 "trsvcid": "49634" 00:23:54.257 }, 00:23:54.257 "auth": { 00:23:54.257 "state": "completed", 00:23:54.257 "digest": "sha256", 00:23:54.257 "dhgroup": "null" 00:23:54.257 } 00:23:54.257 } 00:23:54.257 ]' 00:23:54.257 13:54:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:54.257 13:54:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:23:54.257 13:54:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 
00:23:54.257 13:54:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:23:54.257 13:54:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:54.257 13:54:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:54.257 13:54:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:54.257 13:54:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:54.515 13:54:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MDU2MGQ0Zjk3MGRlOGZmZjNiNTFmZTAxZTk3M2E2OWMxMGQwODU5NDIwODhlMjNjYTA1ZWJiMjMxYThlY2U2YWHslm0=: 00:23:54.515 13:54:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MDU2MGQ0Zjk3MGRlOGZmZjNiNTFmZTAxZTk3M2E2OWMxMGQwODU5NDIwODhlMjNjYTA1ZWJiMjMxYThlY2U2YWHslm0=: 00:23:55.081 13:54:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:55.081 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:55.081 13:54:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:23:55.082 13:54:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.082 13:54:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:55.082 13:54:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.082 13:54:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:23:55.082 13:54:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:55.082 13:54:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:55.082 13:54:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:55.340 13:54:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:23:55.340 13:54:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:55.340 13:54:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:23:55.340 13:54:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:23:55.340 13:54:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:23:55.340 13:54:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:55.340 13:54:55 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:55.340 13:54:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.340 13:54:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:55.340 13:54:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.340 13:54:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:55.340 13:54:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:55.340 13:54:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:55.597 00:23:55.597 13:54:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:55.597 13:54:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:55.597 13:54:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:55.855 13:54:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:55.855 13:54:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:55.855 13:54:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.855 13:54:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:55.855 13:54:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.855 13:54:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:55.855 { 00:23:55.855 "cntlid": 9, 00:23:55.855 "qid": 0, 00:23:55.856 "state": "enabled", 00:23:55.856 "thread": "nvmf_tgt_poll_group_000", 00:23:55.856 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:23:55.856 "listen_address": { 00:23:55.856 "trtype": "RDMA", 00:23:55.856 "adrfam": "IPv4", 00:23:55.856 "traddr": "192.168.100.8", 00:23:55.856 "trsvcid": "4420" 00:23:55.856 }, 00:23:55.856 "peer_address": { 00:23:55.856 "trtype": "RDMA", 00:23:55.856 "adrfam": "IPv4", 00:23:55.856 "traddr": "192.168.100.8", 00:23:55.856 "trsvcid": "46720" 00:23:55.856 }, 00:23:55.856 "auth": { 00:23:55.856 "state": "completed", 00:23:55.856 "digest": "sha256", 00:23:55.856 "dhgroup": "ffdhe2048" 00:23:55.856 } 00:23:55.856 } 00:23:55.856 ]' 00:23:55.856 13:54:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 
00:23:55.856 13:54:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:23:55.856 13:54:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:55.856 13:54:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:23:55.856 13:54:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:55.856 13:54:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:55.856 13:54:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:55.856 13:54:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:56.114 13:54:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODAzZDVlMDFlMzFkNmU3OWY5NDhkNjQ1Zjk4YmRiZTE0NTdiMzllOGYwYzkxOGQwj682eA==: --dhchap-ctrl-secret DHHC-1:03:M2RhOGMzZGFlYjJmMDhlNWZlZTc2Njc1YWJiM2NmMWE3ZmQ0NDBjNTkxMGIxYmJiYTVkYTM0ZmNjNWUxYTI4NCRCfwA=: 00:23:56.114 13:54:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ODAzZDVlMDFlMzFkNmU3OWY5NDhkNjQ1Zjk4YmRiZTE0NTdiMzllOGYwYzkxOGQwj682eA==: --dhchap-ctrl-secret DHHC-1:03:M2RhOGMzZGFlYjJmMDhlNWZlZTc2Njc1YWJiM2NmMWE3ZmQ0NDBjNTkxMGIxYmJiYTVkYTM0ZmNjNWUxYTI4NCRCfwA=: 00:23:56.681 13:54:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:56.938 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:56.938 13:54:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:23:56.938 13:54:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.938 13:54:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:56.939 13:54:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.939 13:54:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:56.939 13:54:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:56.939 13:54:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:56.939 13:54:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:23:56.939 13:54:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:56.939 13:54:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:23:56.939 13:54:56 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:23:56.939 13:54:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:23:56.939 13:54:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:56.939 13:54:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:56.939 13:54:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.939 13:54:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:56.939 13:54:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.939 13:54:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:56.939 13:54:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:56.939 13:54:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:57.197 00:23:57.197 13:54:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:57.197 13:54:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:57.197 13:54:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:57.455 13:54:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:57.455 13:54:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:57.455 13:54:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.455 13:54:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:57.455 13:54:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:57.455 13:54:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:57.455 { 00:23:57.455 "cntlid": 11, 00:23:57.455 "qid": 0, 00:23:57.455 "state": "enabled", 00:23:57.455 "thread": "nvmf_tgt_poll_group_000", 00:23:57.455 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:23:57.455 "listen_address": { 00:23:57.455 "trtype": "RDMA", 00:23:57.455 "adrfam": "IPv4", 00:23:57.455 "traddr": "192.168.100.8", 00:23:57.455 "trsvcid": "4420" 00:23:57.455 }, 00:23:57.455 "peer_address": { 00:23:57.455 "trtype": "RDMA", 00:23:57.455 "adrfam": "IPv4", 00:23:57.455 "traddr": 
"192.168.100.8", 00:23:57.455 "trsvcid": "46674" 00:23:57.455 }, 00:23:57.455 "auth": { 00:23:57.455 "state": "completed", 00:23:57.455 "digest": "sha256", 00:23:57.455 "dhgroup": "ffdhe2048" 00:23:57.455 } 00:23:57.455 } 00:23:57.455 ]' 00:23:57.455 13:54:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:57.455 13:54:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:23:57.455 13:54:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:57.455 13:54:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:23:57.455 13:54:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:57.713 13:54:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:57.713 13:54:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:57.713 13:54:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:57.713 13:54:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTdjNTRmZDg5N2Q5M2IwNjk0NDA5NTkzNjg0YWZmNDU20hyw: --dhchap-ctrl-secret DHHC-1:02:YWMyZDczYzA3MWM5ZDI5ZjZlZjVjYjYxZTgxMTkxYTMxZWJlNTgxODExYTE1M2YwB17WsA==: 00:23:57.713 13:54:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OTdjNTRmZDg5N2Q5M2IwNjk0NDA5NTkzNjg0YWZmNDU20hyw: --dhchap-ctrl-secret DHHC-1:02:YWMyZDczYzA3MWM5ZDI5ZjZlZjVjYjYxZTgxMTkxYTMxZWJlNTgxODExYTE1M2YwB17WsA==: 00:23:58.279 13:54:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:58.537 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:58.537 13:54:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:23:58.537 13:54:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:58.537 13:54:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:58.537 13:54:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:58.537 13:54:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:58.537 13:54:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:58.537 13:54:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:58.795 13:54:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 
00:23:58.795 13:54:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:58.795 13:54:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:23:58.795 13:54:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:23:58.795 13:54:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:23:58.795 13:54:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:58.795 13:54:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:58.795 13:54:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:58.795 13:54:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:58.795 13:54:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:58.795 13:54:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:58.795 13:54:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:58.795 13:54:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:59.053 00:23:59.053 13:54:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:59.053 13:54:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:59.053 13:54:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:59.053 13:54:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:59.053 13:54:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:59.053 13:54:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.053 13:54:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:59.053 13:54:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.053 13:54:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:59.053 { 00:23:59.053 "cntlid": 13, 00:23:59.053 "qid": 0, 00:23:59.053 "state": "enabled", 00:23:59.053 "thread": "nvmf_tgt_poll_group_000", 00:23:59.053 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:23:59.053 "listen_address": { 00:23:59.053 
"trtype": "RDMA", 00:23:59.053 "adrfam": "IPv4", 00:23:59.054 "traddr": "192.168.100.8", 00:23:59.054 "trsvcid": "4420" 00:23:59.054 }, 00:23:59.054 "peer_address": { 00:23:59.054 "trtype": "RDMA", 00:23:59.054 "adrfam": "IPv4", 00:23:59.054 "traddr": "192.168.100.8", 00:23:59.054 "trsvcid": "45930" 00:23:59.054 }, 00:23:59.054 "auth": { 00:23:59.054 "state": "completed", 00:23:59.054 "digest": "sha256", 00:23:59.054 "dhgroup": "ffdhe2048" 00:23:59.054 } 00:23:59.054 } 00:23:59.054 ]' 00:23:59.054 13:54:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:59.346 13:54:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:23:59.346 13:54:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:59.346 13:54:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:23:59.346 13:54:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:59.346 13:54:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:59.346 13:54:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:59.346 13:54:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:59.669 13:54:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzlkODZiZTFjMDUxY2RhNGEwZmVmZWJmYzEwMDJlMzFiZGUxMDE0Njk4MDk5MDRjEe2NXQ==: --dhchap-ctrl-secret DHHC-1:01:OTgxZWY0M2NiMWNlZmUwNjg3MGY5ODQ3YmE4MGJlZGH+9nhS: 00:23:59.669 13:54:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YzlkODZiZTFjMDUxY2RhNGEwZmVmZWJmYzEwMDJlMzFiZGUxMDE0Njk4MDk5MDRjEe2NXQ==: --dhchap-ctrl-secret DHHC-1:01:OTgxZWY0M2NiMWNlZmUwNjg3MGY5ODQ3YmE4MGJlZGH+9nhS: 00:24:00.236 13:54:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:00.236 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:00.236 13:54:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:24:00.236 13:54:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:00.236 13:54:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:00.236 13:54:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:00.236 13:54:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:24:00.236 13:54:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:00.236 13:54:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:00.495 13:55:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:24:00.495 13:55:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:24:00.495 13:55:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:24:00.495 13:55:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:24:00.495 13:55:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:24:00.495 13:55:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:00.495 13:55:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key3 00:24:00.495 13:55:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:00.495 13:55:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:00.495 13:55:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:00.495 13:55:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:24:00.495 13:55:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:24:00.495 13:55:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:24:00.753 00:24:00.753 13:55:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:24:00.753 13:55:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:24:00.753 13:55:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:00.753 13:55:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:00.753 13:55:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:00.753 13:55:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:00.753 13:55:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:00.753 13:55:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:00.753 13:55:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:24:00.753 { 00:24:00.753 "cntlid": 15, 00:24:00.753 "qid": 0, 00:24:00.753 "state": "enabled", 
00:24:00.753 "thread": "nvmf_tgt_poll_group_000", 00:24:00.753 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:24:00.753 "listen_address": { 00:24:00.753 "trtype": "RDMA", 00:24:00.753 "adrfam": "IPv4", 00:24:00.753 "traddr": "192.168.100.8", 00:24:00.753 "trsvcid": "4420" 00:24:00.753 }, 00:24:00.753 "peer_address": { 00:24:00.753 "trtype": "RDMA", 00:24:00.753 "adrfam": "IPv4", 00:24:00.753 "traddr": "192.168.100.8", 00:24:00.753 "trsvcid": "44251" 00:24:00.753 }, 00:24:00.753 "auth": { 00:24:00.753 "state": "completed", 00:24:00.753 "digest": "sha256", 00:24:00.753 "dhgroup": "ffdhe2048" 00:24:00.753 } 00:24:00.753 } 00:24:00.753 ]' 00:24:00.753 13:55:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:24:01.012 13:55:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:24:01.012 13:55:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:24:01.012 13:55:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:24:01.012 13:55:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:24:01.012 13:55:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:01.012 13:55:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:01.012 13:55:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:01.270 13:55:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MDU2MGQ0Zjk3MGRlOGZmZjNiNTFmZTAxZTk3M2E2OWMxMGQwODU5NDIwODhlMjNjYTA1ZWJiMjMxYThlY2U2YWHslm0=: 00:24:01.270 13:55:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MDU2MGQ0Zjk3MGRlOGZmZjNiNTFmZTAxZTk3M2E2OWMxMGQwODU5NDIwODhlMjNjYTA1ZWJiMjMxYThlY2U2YWHslm0=: 00:24:01.837 13:55:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:01.837 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:01.837 13:55:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:24:01.837 13:55:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:01.837 13:55:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:01.837 13:55:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:01.837 13:55:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:24:01.837 13:55:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:24:01.837 13:55:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:01.837 13:55:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:02.095 13:55:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:24:02.095 13:55:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:24:02.095 13:55:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:24:02.095 13:55:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:24:02.095 13:55:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:24:02.095 13:55:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:02.095 13:55:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:02.095 13:55:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:02.095 13:55:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:02.095 13:55:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:02.095 13:55:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:02.095 13:55:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:02.095 13:55:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:02.354 00:24:02.354 13:55:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:24:02.354 13:55:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:24:02.354 13:55:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:02.613 13:55:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:02.613 13:55:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:02.613 13:55:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:02.613 13:55:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:02.613 13:55:02 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:02.613 13:55:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:24:02.613 { 00:24:02.613 "cntlid": 17, 00:24:02.613 "qid": 0, 00:24:02.613 "state": "enabled", 00:24:02.613 "thread": "nvmf_tgt_poll_group_000", 00:24:02.613 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:24:02.613 "listen_address": { 00:24:02.613 "trtype": "RDMA", 00:24:02.613 "adrfam": "IPv4", 00:24:02.613 "traddr": "192.168.100.8", 00:24:02.613 "trsvcid": "4420" 00:24:02.613 }, 00:24:02.613 "peer_address": { 00:24:02.613 "trtype": "RDMA", 00:24:02.613 "adrfam": "IPv4", 00:24:02.613 "traddr": "192.168.100.8", 00:24:02.613 "trsvcid": "34548" 00:24:02.613 }, 00:24:02.613 "auth": { 00:24:02.613 "state": "completed", 00:24:02.613 "digest": "sha256", 00:24:02.613 "dhgroup": "ffdhe3072" 00:24:02.613 } 00:24:02.613 } 00:24:02.613 ]' 00:24:02.613 13:55:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:24:02.613 13:55:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:24:02.613 13:55:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:24:02.613 13:55:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:24:02.613 13:55:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:24:02.613 13:55:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:02.613 13:55:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:02.613 13:55:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:02.872 13:55:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODAzZDVlMDFlMzFkNmU3OWY5NDhkNjQ1Zjk4YmRiZTE0NTdiMzllOGYwYzkxOGQwj682eA==: --dhchap-ctrl-secret DHHC-1:03:M2RhOGMzZGFlYjJmMDhlNWZlZTc2Njc1YWJiM2NmMWE3ZmQ0NDBjNTkxMGIxYmJiYTVkYTM0ZmNjNWUxYTI4NCRCfwA=: 00:24:02.872 13:55:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ODAzZDVlMDFlMzFkNmU3OWY5NDhkNjQ1Zjk4YmRiZTE0NTdiMzllOGYwYzkxOGQwj682eA==: --dhchap-ctrl-secret DHHC-1:03:M2RhOGMzZGFlYjJmMDhlNWZlZTc2Njc1YWJiM2NmMWE3ZmQ0NDBjNTkxMGIxYmJiYTVkYTM0ZmNjNWUxYTI4NCRCfwA=: 00:24:03.438 13:55:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:03.438 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:03.438 13:55:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:24:03.438 13:55:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.438 13:55:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:24:03.438 13:55:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.438 13:55:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:24:03.438 13:55:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:03.438 13:55:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:03.697 13:55:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:24:03.697 13:55:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:24:03.698 13:55:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:24:03.698 13:55:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:24:03.698 13:55:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:24:03.698 13:55:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:03.698 13:55:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:03.698 13:55:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.698 13:55:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:03.698 13:55:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.698 13:55:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:03.698 13:55:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:03.698 13:55:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:03.957 00:24:03.957 13:55:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:24:03.957 13:55:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:24:03.957 13:55:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:04.216 13:55:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:04.216 13:55:03 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:04.216 13:55:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.216 13:55:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:04.216 13:55:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.216 13:55:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:24:04.216 { 00:24:04.216 "cntlid": 19, 00:24:04.216 "qid": 0, 00:24:04.216 "state": "enabled", 00:24:04.216 "thread": "nvmf_tgt_poll_group_000", 00:24:04.216 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:24:04.216 "listen_address": { 00:24:04.216 "trtype": "RDMA", 00:24:04.216 "adrfam": "IPv4", 00:24:04.216 "traddr": "192.168.100.8", 00:24:04.216 "trsvcid": "4420" 00:24:04.216 }, 00:24:04.216 "peer_address": { 00:24:04.216 "trtype": "RDMA", 00:24:04.216 "adrfam": "IPv4", 00:24:04.216 "traddr": "192.168.100.8", 00:24:04.216 "trsvcid": "35450" 00:24:04.216 }, 00:24:04.216 "auth": { 00:24:04.216 "state": "completed", 00:24:04.216 "digest": "sha256", 00:24:04.216 "dhgroup": "ffdhe3072" 00:24:04.216 } 00:24:04.216 } 00:24:04.216 ]' 00:24:04.216 13:55:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:24:04.216 13:55:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:24:04.216 13:55:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:24:04.216 13:55:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:24:04.216 13:55:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:24:04.216 13:55:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:04.216 13:55:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:04.216 13:55:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:04.475 13:55:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTdjNTRmZDg5N2Q5M2IwNjk0NDA5NTkzNjg0YWZmNDU20hyw: --dhchap-ctrl-secret DHHC-1:02:YWMyZDczYzA3MWM5ZDI5ZjZlZjVjYjYxZTgxMTkxYTMxZWJlNTgxODExYTE1M2YwB17WsA==: 00:24:04.475 13:55:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OTdjNTRmZDg5N2Q5M2IwNjk0NDA5NTkzNjg0YWZmNDU20hyw: --dhchap-ctrl-secret DHHC-1:02:YWMyZDczYzA3MWM5ZDI5ZjZlZjVjYjYxZTgxMTkxYTMxZWJlNTgxODExYTE1M2YwB17WsA==: 00:24:05.041 13:55:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:05.341 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:05.341 13:55:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:24:05.341 13:55:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.341 13:55:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:05.341 13:55:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.341 13:55:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:24:05.341 13:55:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:05.341 13:55:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:05.341 13:55:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:24:05.341 13:55:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:24:05.341 13:55:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:24:05.341 13:55:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:24:05.341 13:55:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:24:05.341 13:55:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:05.341 13:55:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:05.341 13:55:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.341 13:55:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:05.341 13:55:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.341 13:55:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:05.341 13:55:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:05.341 13:55:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:05.599 00:24:05.599 13:55:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:24:05.599 13:55:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:24:05.599 13:55:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:05.863 13:55:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:05.863 13:55:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:05.863 13:55:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.863 13:55:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:05.863 13:55:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.863 13:55:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:24:05.863 { 00:24:05.863 "cntlid": 21, 00:24:05.863 "qid": 0, 00:24:05.863 "state": "enabled", 00:24:05.863 "thread": "nvmf_tgt_poll_group_000", 00:24:05.863 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:24:05.863 "listen_address": { 00:24:05.863 "trtype": "RDMA", 00:24:05.863 "adrfam": "IPv4", 00:24:05.863 "traddr": "192.168.100.8", 00:24:05.863 "trsvcid": "4420" 00:24:05.863 }, 00:24:05.863 "peer_address": { 00:24:05.863 "trtype": "RDMA", 00:24:05.863 "adrfam": "IPv4", 00:24:05.863 "traddr": "192.168.100.8", 00:24:05.863 "trsvcid": "35589" 00:24:05.863 }, 00:24:05.863 "auth": { 00:24:05.863 "state": "completed", 00:24:05.863 "digest": "sha256", 00:24:05.863 "dhgroup": "ffdhe3072" 00:24:05.863 } 00:24:05.863 } 00:24:05.863 ]' 00:24:05.863 13:55:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:24:05.863 13:55:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:24:05.863 13:55:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:24:05.863 13:55:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:24:05.863 13:55:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:24:05.863 13:55:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:05.863 13:55:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:05.863 13:55:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:06.121 13:55:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzlkODZiZTFjMDUxY2RhNGEwZmVmZWJmYzEwMDJlMzFiZGUxMDE0Njk4MDk5MDRjEe2NXQ==: --dhchap-ctrl-secret DHHC-1:01:OTgxZWY0M2NiMWNlZmUwNjg3MGY5ODQ3YmE4MGJlZGH+9nhS: 00:24:06.121 13:55:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YzlkODZiZTFjMDUxY2RhNGEwZmVmZWJmYzEwMDJlMzFiZGUxMDE0Njk4MDk5MDRjEe2NXQ==: --dhchap-ctrl-secret DHHC-1:01:OTgxZWY0M2NiMWNlZmUwNjg3MGY5ODQ3YmE4MGJlZGH+9nhS: 00:24:06.686 13:55:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme 
disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:06.944 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:06.944 13:55:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:24:06.944 13:55:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:06.944 13:55:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:06.944 13:55:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:06.944 13:55:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:24:06.945 13:55:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:06.945 13:55:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:06.945 13:55:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:24:06.945 13:55:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:24:06.945 13:55:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:24:06.945 13:55:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:24:06.945 13:55:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:24:06.945 13:55:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:06.945 13:55:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key3 00:24:06.945 13:55:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:06.945 13:55:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:07.202 13:55:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.202 13:55:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:24:07.202 13:55:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:24:07.202 13:55:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:24:07.202 00:24:07.461 13:55:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:24:07.461 13:55:07 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:24:07.461 13:55:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:07.461 13:55:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:07.461 13:55:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:07.462 13:55:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.462 13:55:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:07.462 13:55:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.462 13:55:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:24:07.462 { 00:24:07.462 "cntlid": 23, 00:24:07.462 "qid": 0, 00:24:07.462 "state": "enabled", 00:24:07.462 "thread": "nvmf_tgt_poll_group_000", 00:24:07.462 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:24:07.462 "listen_address": { 00:24:07.462 "trtype": "RDMA", 00:24:07.462 "adrfam": "IPv4", 00:24:07.462 "traddr": "192.168.100.8", 00:24:07.462 "trsvcid": "4420" 00:24:07.462 }, 00:24:07.462 "peer_address": { 00:24:07.462 "trtype": "RDMA", 00:24:07.462 "adrfam": "IPv4", 00:24:07.462 "traddr": "192.168.100.8", 00:24:07.462 "trsvcid": "59544" 00:24:07.462 }, 00:24:07.462 "auth": { 00:24:07.462 "state": "completed", 00:24:07.462 "digest": "sha256", 00:24:07.462 "dhgroup": "ffdhe3072" 00:24:07.462 } 00:24:07.462 } 00:24:07.462 ]' 00:24:07.462 13:55:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:24:07.462 13:55:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:24:07.462 13:55:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:24:07.721 13:55:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:24:07.721 13:55:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:24:07.721 13:55:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:07.721 13:55:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:07.721 13:55:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:07.721 13:55:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MDU2MGQ0Zjk3MGRlOGZmZjNiNTFmZTAxZTk3M2E2OWMxMGQwODU5NDIwODhlMjNjYTA1ZWJiMjMxYThlY2U2YWHslm0=: 00:24:07.721 13:55:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MDU2MGQ0Zjk3MGRlOGZmZjNiNTFmZTAxZTk3M2E2OWMxMGQwODU5NDIwODhlMjNjYTA1ZWJiMjMxYThlY2U2YWHslm0=: 00:24:08.289 13:55:08 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:08.548 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:08.548 13:55:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:24:08.548 13:55:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.548 13:55:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:08.548 13:55:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.548 13:55:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:24:08.548 13:55:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:24:08.548 13:55:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:08.548 13:55:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:08.807 13:55:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:24:08.807 13:55:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:24:08.807 13:55:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:24:08.807 13:55:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:24:08.807 13:55:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:24:08.807 13:55:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:08.807 13:55:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:08.807 13:55:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.807 13:55:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:08.807 13:55:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.807 13:55:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:08.807 13:55:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:08.807 13:55:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:09.067 00:24:09.067 13:55:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:24:09.067 13:55:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:24:09.067 13:55:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:09.067 13:55:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:09.067 13:55:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:09.067 13:55:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.067 13:55:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:09.067 13:55:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.067 13:55:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:24:09.067 { 00:24:09.067 "cntlid": 25, 00:24:09.067 "qid": 0, 00:24:09.067 "state": "enabled", 00:24:09.067 "thread": "nvmf_tgt_poll_group_000", 00:24:09.067 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:24:09.067 "listen_address": { 00:24:09.067 "trtype": "RDMA", 00:24:09.067 "adrfam": "IPv4", 00:24:09.067 "traddr": "192.168.100.8", 00:24:09.067 "trsvcid": "4420" 00:24:09.067 }, 00:24:09.067 "peer_address": { 00:24:09.067 "trtype": "RDMA", 00:24:09.067 "adrfam": "IPv4", 00:24:09.067 "traddr": "192.168.100.8", 00:24:09.067 "trsvcid": "34325" 00:24:09.067 }, 00:24:09.067 "auth": { 00:24:09.067 "state": "completed", 00:24:09.067 "digest": "sha256", 00:24:09.067 "dhgroup": "ffdhe4096" 00:24:09.067 } 00:24:09.067 } 00:24:09.067 ]' 00:24:09.067 13:55:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:24:09.325 13:55:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:24:09.325 13:55:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:24:09.325 13:55:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:24:09.325 13:55:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:24:09.325 13:55:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:09.325 13:55:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:09.325 13:55:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:09.584 13:55:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODAzZDVlMDFlMzFkNmU3OWY5NDhkNjQ1Zjk4YmRiZTE0NTdiMzllOGYwYzkxOGQwj682eA==: --dhchap-ctrl-secret DHHC-1:03:M2RhOGMzZGFlYjJmMDhlNWZlZTc2Njc1YWJiM2NmMWE3ZmQ0NDBjNTkxMGIxYmJiYTVkYTM0ZmNjNWUxYTI4NCRCfwA=: 00:24:09.584 13:55:09 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ODAzZDVlMDFlMzFkNmU3OWY5NDhkNjQ1Zjk4YmRiZTE0NTdiMzllOGYwYzkxOGQwj682eA==: --dhchap-ctrl-secret DHHC-1:03:M2RhOGMzZGFlYjJmMDhlNWZlZTc2Njc1YWJiM2NmMWE3ZmQ0NDBjNTkxMGIxYmJiYTVkYTM0ZmNjNWUxYTI4NCRCfwA=: 00:24:10.150 13:55:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:10.150 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:10.150 13:55:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:24:10.150 13:55:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.150 13:55:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:10.150 13:55:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:10.150 13:55:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:24:10.150 13:55:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:10.150 13:55:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:10.410 13:55:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:24:10.410 13:55:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:24:10.410 13:55:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:24:10.410 13:55:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:24:10.410 13:55:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:24:10.410 13:55:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:10.410 13:55:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:10.410 13:55:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.410 13:55:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:10.410 13:55:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:10.410 13:55:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:10.410 13:55:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:10.410 13:55:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:10.668 00:24:10.668 13:55:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:24:10.668 13:55:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:24:10.668 13:55:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:10.927 13:55:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:10.927 13:55:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:10.927 13:55:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.927 13:55:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:10.927 13:55:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:10.927 13:55:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:24:10.927 { 00:24:10.927 "cntlid": 27, 00:24:10.927 "qid": 0, 00:24:10.927 "state": "enabled", 00:24:10.927 "thread": "nvmf_tgt_poll_group_000", 00:24:10.927 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:24:10.927 "listen_address": { 00:24:10.927 "trtype": "RDMA", 00:24:10.927 "adrfam": "IPv4", 00:24:10.927 "traddr": "192.168.100.8", 00:24:10.927 "trsvcid": "4420" 00:24:10.927 }, 00:24:10.927 "peer_address": { 00:24:10.927 "trtype": "RDMA", 00:24:10.927 "adrfam": "IPv4", 00:24:10.927 "traddr": "192.168.100.8", 00:24:10.927 "trsvcid": "54244" 00:24:10.927 }, 00:24:10.927 "auth": { 00:24:10.927 "state": "completed", 00:24:10.927 "digest": "sha256", 00:24:10.927 "dhgroup": "ffdhe4096" 00:24:10.927 } 00:24:10.927 } 00:24:10.927 ]' 00:24:10.927 13:55:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:24:10.927 13:55:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:24:10.927 13:55:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:24:10.927 13:55:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:24:10.927 13:55:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:24:10.927 13:55:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:10.927 13:55:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:10.927 13:55:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:11.187 13:55:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTdjNTRmZDg5N2Q5M2IwNjk0NDA5NTkzNjg0YWZmNDU20hyw: --dhchap-ctrl-secret DHHC-1:02:YWMyZDczYzA3MWM5ZDI5ZjZlZjVjYjYxZTgxMTkxYTMxZWJlNTgxODExYTE1M2YwB17WsA==: 00:24:11.187 13:55:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OTdjNTRmZDg5N2Q5M2IwNjk0NDA5NTkzNjg0YWZmNDU20hyw: --dhchap-ctrl-secret DHHC-1:02:YWMyZDczYzA3MWM5ZDI5ZjZlZjVjYjYxZTgxMTkxYTMxZWJlNTgxODExYTE1M2YwB17WsA==: 00:24:11.755 13:55:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:12.015 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:12.015 13:55:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:24:12.015 13:55:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.015 13:55:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:12.015 13:55:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.015 13:55:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:24:12.015 13:55:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:12.015 13:55:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:12.015 13:55:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:24:12.015 13:55:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:24:12.015 13:55:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:24:12.015 13:55:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:24:12.015 13:55:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:24:12.015 13:55:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:12.015 13:55:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:12.015 13:55:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.015 13:55:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:12.015 13:55:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.015 13:55:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:12.015 13:55:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:12.015 13:55:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:12.273 00:24:12.273 13:55:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:24:12.273 13:55:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:24:12.273 13:55:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:12.532 13:55:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:12.532 13:55:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:12.532 13:55:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.532 13:55:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:12.532 13:55:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.532 13:55:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:24:12.532 { 00:24:12.532 "cntlid": 29, 00:24:12.532 "qid": 0, 00:24:12.532 "state": "enabled", 00:24:12.532 "thread": "nvmf_tgt_poll_group_000", 00:24:12.532 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:24:12.532 "listen_address": { 00:24:12.532 "trtype": "RDMA", 00:24:12.532 "adrfam": "IPv4", 00:24:12.532 "traddr": "192.168.100.8", 00:24:12.532 "trsvcid": "4420" 00:24:12.532 }, 00:24:12.532 "peer_address": { 00:24:12.532 "trtype": "RDMA", 00:24:12.532 "adrfam": "IPv4", 00:24:12.532 "traddr": "192.168.100.8", 00:24:12.532 "trsvcid": "53608" 00:24:12.532 }, 00:24:12.532 "auth": { 00:24:12.532 "state": "completed", 00:24:12.532 "digest": "sha256", 00:24:12.532 "dhgroup": "ffdhe4096" 00:24:12.532 } 00:24:12.532 } 00:24:12.532 ]' 00:24:12.532 13:55:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:24:12.532 13:55:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:24:12.532 13:55:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:24:12.791 13:55:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:24:12.791 13:55:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:24:12.791 13:55:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:12.791 13:55:12 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:12.791 13:55:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:12.791 13:55:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzlkODZiZTFjMDUxY2RhNGEwZmVmZWJmYzEwMDJlMzFiZGUxMDE0Njk4MDk5MDRjEe2NXQ==: --dhchap-ctrl-secret DHHC-1:01:OTgxZWY0M2NiMWNlZmUwNjg3MGY5ODQ3YmE4MGJlZGH+9nhS: 00:24:12.791 13:55:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YzlkODZiZTFjMDUxY2RhNGEwZmVmZWJmYzEwMDJlMzFiZGUxMDE0Njk4MDk5MDRjEe2NXQ==: --dhchap-ctrl-secret DHHC-1:01:OTgxZWY0M2NiMWNlZmUwNjg3MGY5ODQ3YmE4MGJlZGH+9nhS: 00:24:13.357 13:55:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:13.616 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:13.616 13:55:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:24:13.616 13:55:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.616 13:55:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:13.616 13:55:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.616 13:55:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:24:13.616 13:55:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:13.616 13:55:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:13.874 13:55:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:24:13.874 13:55:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:24:13.874 13:55:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:24:13.874 13:55:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:24:13.874 13:55:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:24:13.874 13:55:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:13.874 13:55:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key3 00:24:13.874 13:55:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.874 13:55:13 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:13.874 13:55:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.874 13:55:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:24:13.875 13:55:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:24:13.875 13:55:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:24:14.133 00:24:14.133 13:55:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:24:14.133 13:55:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:24:14.133 13:55:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:14.133 13:55:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:14.133 13:55:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:14.133 13:55:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.133 13:55:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:14.133 13:55:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.133 13:55:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:24:14.133 { 00:24:14.133 "cntlid": 31, 00:24:14.133 "qid": 0, 00:24:14.133 "state": "enabled", 00:24:14.133 "thread": "nvmf_tgt_poll_group_000", 00:24:14.133 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:24:14.133 "listen_address": { 00:24:14.133 "trtype": "RDMA", 00:24:14.133 "adrfam": "IPv4", 00:24:14.133 "traddr": "192.168.100.8", 00:24:14.133 "trsvcid": "4420" 00:24:14.133 }, 00:24:14.133 "peer_address": { 00:24:14.133 "trtype": "RDMA", 00:24:14.133 "adrfam": "IPv4", 00:24:14.133 "traddr": "192.168.100.8", 00:24:14.133 "trsvcid": "51962" 00:24:14.133 }, 00:24:14.133 "auth": { 00:24:14.133 "state": "completed", 00:24:14.133 "digest": "sha256", 00:24:14.133 "dhgroup": "ffdhe4096" 00:24:14.133 } 00:24:14.133 } 00:24:14.133 ]' 00:24:14.133 13:55:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:24:14.391 13:55:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:24:14.391 13:55:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:24:14.391 13:55:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:24:14.391 13:55:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # 
jq -r '.[0].auth.state' 00:24:14.391 13:55:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:14.391 13:55:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:14.391 13:55:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:14.649 13:55:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MDU2MGQ0Zjk3MGRlOGZmZjNiNTFmZTAxZTk3M2E2OWMxMGQwODU5NDIwODhlMjNjYTA1ZWJiMjMxYThlY2U2YWHslm0=: 00:24:14.649 13:55:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MDU2MGQ0Zjk3MGRlOGZmZjNiNTFmZTAxZTk3M2E2OWMxMGQwODU5NDIwODhlMjNjYTA1ZWJiMjMxYThlY2U2YWHslm0=: 00:24:15.216 13:55:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:15.216 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:15.216 13:55:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:24:15.216 13:55:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.216 13:55:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:15.216 13:55:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.216 13:55:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:24:15.216 13:55:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:24:15.216 13:55:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:15.216 13:55:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:15.474 13:55:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:24:15.474 13:55:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:24:15.474 13:55:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:24:15.474 13:55:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:24:15.474 13:55:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:24:15.474 13:55:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:15.474 13:55:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key 
key0 --dhchap-ctrlr-key ckey0 00:24:15.474 13:55:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.474 13:55:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:15.474 13:55:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.474 13:55:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:15.474 13:55:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:15.474 13:55:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:15.732 00:24:15.732 13:55:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:24:15.732 13:55:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:24:15.732 13:55:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:15.990 13:55:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:15.990 13:55:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:15.990 13:55:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.990 13:55:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:15.990 13:55:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.990 13:55:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:24:15.990 { 00:24:15.990 "cntlid": 33, 00:24:15.990 "qid": 0, 00:24:15.990 "state": "enabled", 00:24:15.990 "thread": "nvmf_tgt_poll_group_000", 00:24:15.990 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:24:15.990 "listen_address": { 00:24:15.990 "trtype": "RDMA", 00:24:15.990 "adrfam": "IPv4", 00:24:15.990 "traddr": "192.168.100.8", 00:24:15.990 "trsvcid": "4420" 00:24:15.990 }, 00:24:15.990 "peer_address": { 00:24:15.990 "trtype": "RDMA", 00:24:15.990 "adrfam": "IPv4", 00:24:15.990 "traddr": "192.168.100.8", 00:24:15.990 "trsvcid": "33982" 00:24:15.990 }, 00:24:15.990 "auth": { 00:24:15.990 "state": "completed", 00:24:15.990 "digest": "sha256", 00:24:15.990 "dhgroup": "ffdhe6144" 00:24:15.990 } 00:24:15.990 } 00:24:15.990 ]' 00:24:15.990 13:55:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:24:15.990 13:55:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:24:15.990 13:55:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq 
-r '.[0].auth.dhgroup' 00:24:15.990 13:55:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:24:15.990 13:55:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:24:15.990 13:55:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:15.990 13:55:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:15.990 13:55:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:16.247 13:55:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODAzZDVlMDFlMzFkNmU3OWY5NDhkNjQ1Zjk4YmRiZTE0NTdiMzllOGYwYzkxOGQwj682eA==: --dhchap-ctrl-secret DHHC-1:03:M2RhOGMzZGFlYjJmMDhlNWZlZTc2Njc1YWJiM2NmMWE3ZmQ0NDBjNTkxMGIxYmJiYTVkYTM0ZmNjNWUxYTI4NCRCfwA=: 00:24:16.247 13:55:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ODAzZDVlMDFlMzFkNmU3OWY5NDhkNjQ1Zjk4YmRiZTE0NTdiMzllOGYwYzkxOGQwj682eA==: --dhchap-ctrl-secret DHHC-1:03:M2RhOGMzZGFlYjJmMDhlNWZlZTc2Njc1YWJiM2NmMWE3ZmQ0NDBjNTkxMGIxYmJiYTVkYTM0ZmNjNWUxYTI4NCRCfwA=: 00:24:16.813 13:55:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:16.813 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:16.813 13:55:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:24:16.813 13:55:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.814 13:55:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:17.073 13:55:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.073 13:55:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:24:17.073 13:55:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:17.073 13:55:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:17.073 13:55:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:24:17.073 13:55:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:24:17.073 13:55:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:24:17.073 13:55:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:24:17.073 13:55:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:24:17.073 13:55:16 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:17.073 13:55:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:17.073 13:55:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.073 13:55:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:17.073 13:55:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.073 13:55:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:17.073 13:55:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:17.073 13:55:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:17.332 00:24:17.592 13:55:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:24:17.592 13:55:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:24:17.592 13:55:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:17.592 13:55:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:17.592 13:55:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:17.592 13:55:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.592 13:55:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:17.592 13:55:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.592 13:55:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:24:17.592 { 00:24:17.592 "cntlid": 35, 00:24:17.592 "qid": 0, 00:24:17.592 "state": "enabled", 00:24:17.592 "thread": "nvmf_tgt_poll_group_000", 00:24:17.592 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:24:17.592 "listen_address": { 00:24:17.592 "trtype": "RDMA", 00:24:17.592 "adrfam": "IPv4", 00:24:17.592 "traddr": "192.168.100.8", 00:24:17.592 "trsvcid": "4420" 00:24:17.592 }, 00:24:17.592 "peer_address": { 00:24:17.592 "trtype": "RDMA", 00:24:17.592 "adrfam": "IPv4", 00:24:17.592 "traddr": "192.168.100.8", 00:24:17.592 "trsvcid": "56671" 00:24:17.592 }, 00:24:17.592 "auth": { 00:24:17.592 "state": "completed", 00:24:17.592 "digest": "sha256", 00:24:17.592 "dhgroup": "ffdhe6144" 00:24:17.592 } 00:24:17.592 } 
00:24:17.592 ]' 00:24:17.592 13:55:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:24:17.592 13:55:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:24:17.592 13:55:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:24:17.851 13:55:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:24:17.851 13:55:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:24:17.851 13:55:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:17.851 13:55:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:17.851 13:55:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:18.110 13:55:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTdjNTRmZDg5N2Q5M2IwNjk0NDA5NTkzNjg0YWZmNDU20hyw: --dhchap-ctrl-secret DHHC-1:02:YWMyZDczYzA3MWM5ZDI5ZjZlZjVjYjYxZTgxMTkxYTMxZWJlNTgxODExYTE1M2YwB17WsA==: 00:24:18.110 13:55:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OTdjNTRmZDg5N2Q5M2IwNjk0NDA5NTkzNjg0YWZmNDU20hyw: --dhchap-ctrl-secret DHHC-1:02:YWMyZDczYzA3MWM5ZDI5ZjZlZjVjYjYxZTgxMTkxYTMxZWJlNTgxODExYTE1M2YwB17WsA==: 00:24:18.677 13:55:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:18.677 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:18.677 13:55:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:24:18.677 13:55:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:18.677 13:55:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:18.677 13:55:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:18.677 13:55:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:24:18.677 13:55:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:18.677 13:55:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:18.936 13:55:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:24:18.936 13:55:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:24:18.936 13:55:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
digest=sha256 00:24:18.936 13:55:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:24:18.936 13:55:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:24:18.936 13:55:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:18.936 13:55:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:18.936 13:55:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:18.936 13:55:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:18.936 13:55:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:18.936 13:55:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:18.936 13:55:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:18.936 13:55:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:19.195 00:24:19.195 13:55:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:24:19.195 13:55:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:19.195 13:55:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:24:19.455 13:55:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:19.455 13:55:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:19.455 13:55:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.455 13:55:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:19.455 13:55:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.455 13:55:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:24:19.455 { 00:24:19.455 "cntlid": 37, 00:24:19.455 "qid": 0, 00:24:19.455 "state": "enabled", 00:24:19.455 "thread": "nvmf_tgt_poll_group_000", 00:24:19.455 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:24:19.455 "listen_address": { 00:24:19.455 "trtype": "RDMA", 00:24:19.455 "adrfam": "IPv4", 00:24:19.455 "traddr": "192.168.100.8", 00:24:19.455 "trsvcid": "4420" 00:24:19.455 }, 00:24:19.455 "peer_address": { 00:24:19.455 "trtype": "RDMA", 00:24:19.455 "adrfam": 
"IPv4", 00:24:19.455 "traddr": "192.168.100.8", 00:24:19.455 "trsvcid": "39654" 00:24:19.455 }, 00:24:19.455 "auth": { 00:24:19.455 "state": "completed", 00:24:19.455 "digest": "sha256", 00:24:19.455 "dhgroup": "ffdhe6144" 00:24:19.455 } 00:24:19.455 } 00:24:19.455 ]' 00:24:19.455 13:55:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:24:19.455 13:55:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:24:19.455 13:55:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:24:19.455 13:55:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:24:19.455 13:55:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:24:19.455 13:55:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:19.455 13:55:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:19.455 13:55:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:19.713 13:55:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzlkODZiZTFjMDUxY2RhNGEwZmVmZWJmYzEwMDJlMzFiZGUxMDE0Njk4MDk5MDRjEe2NXQ==: --dhchap-ctrl-secret DHHC-1:01:OTgxZWY0M2NiMWNlZmUwNjg3MGY5ODQ3YmE4MGJlZGH+9nhS: 00:24:19.713 13:55:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YzlkODZiZTFjMDUxY2RhNGEwZmVmZWJmYzEwMDJlMzFiZGUxMDE0Njk4MDk5MDRjEe2NXQ==: --dhchap-ctrl-secret DHHC-1:01:OTgxZWY0M2NiMWNlZmUwNjg3MGY5ODQ3YmE4MGJlZGH+9nhS: 00:24:20.279 13:55:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:20.279 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:20.279 13:55:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:24:20.279 13:55:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.279 13:55:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:20.279 13:55:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.279 13:55:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:24:20.279 13:55:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:20.279 13:55:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:20.539 13:55:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # 
connect_authenticate sha256 ffdhe6144 3 00:24:20.539 13:55:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:24:20.539 13:55:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:24:20.539 13:55:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:24:20.539 13:55:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:24:20.539 13:55:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:20.539 13:55:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key3 00:24:20.539 13:55:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.539 13:55:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:20.539 13:55:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.539 13:55:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:24:20.539 13:55:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:24:20.539 13:55:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:24:20.797 00:24:21.055 13:55:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:24:21.055 13:55:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:24:21.055 13:55:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:21.055 13:55:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:21.055 13:55:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:21.055 13:55:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.055 13:55:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:21.056 13:55:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.056 13:55:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:24:21.056 { 00:24:21.056 "cntlid": 39, 00:24:21.056 "qid": 0, 00:24:21.056 "state": "enabled", 00:24:21.056 "thread": "nvmf_tgt_poll_group_000", 00:24:21.056 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:24:21.056 "listen_address": { 00:24:21.056 "trtype": "RDMA", 00:24:21.056 "adrfam": "IPv4", 00:24:21.056 
"traddr": "192.168.100.8", 00:24:21.056 "trsvcid": "4420" 00:24:21.056 }, 00:24:21.056 "peer_address": { 00:24:21.056 "trtype": "RDMA", 00:24:21.056 "adrfam": "IPv4", 00:24:21.056 "traddr": "192.168.100.8", 00:24:21.056 "trsvcid": "46746" 00:24:21.056 }, 00:24:21.056 "auth": { 00:24:21.056 "state": "completed", 00:24:21.056 "digest": "sha256", 00:24:21.056 "dhgroup": "ffdhe6144" 00:24:21.056 } 00:24:21.056 } 00:24:21.056 ]' 00:24:21.056 13:55:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:24:21.056 13:55:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:24:21.056 13:55:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:24:21.314 13:55:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:24:21.314 13:55:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:24:21.314 13:55:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:21.314 13:55:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:21.314 13:55:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:21.314 13:55:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MDU2MGQ0Zjk3MGRlOGZmZjNiNTFmZTAxZTk3M2E2OWMxMGQwODU5NDIwODhlMjNjYTA1ZWJiMjMxYThlY2U2YWHslm0=: 00:24:21.314 13:55:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MDU2MGQ0Zjk3MGRlOGZmZjNiNTFmZTAxZTk3M2E2OWMxMGQwODU5NDIwODhlMjNjYTA1ZWJiMjMxYThlY2U2YWHslm0=: 00:24:21.881 13:55:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:22.139 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:22.139 13:55:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:24:22.139 13:55:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.139 13:55:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:22.139 13:55:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.139 13:55:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:24:22.139 13:55:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:24:22.139 13:55:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:22.139 13:55:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:22.441 13:55:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:24:22.441 13:55:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:24:22.441 13:55:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:24:22.441 13:55:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:24:22.441 13:55:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:24:22.441 13:55:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:22.441 13:55:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:22.441 13:55:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.441 13:55:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:22.441 13:55:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.441 13:55:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:22.441 13:55:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:22.441 13:55:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:22.699 00:24:22.699 13:55:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:24:22.699 13:55:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:24:22.699 13:55:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:22.957 13:55:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:22.957 13:55:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:22.957 13:55:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.957 13:55:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:22.957 13:55:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.957 13:55:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:24:22.957 { 00:24:22.957 "cntlid": 41, 00:24:22.957 "qid": 0, 00:24:22.957 "state": "enabled", 
00:24:22.957 "thread": "nvmf_tgt_poll_group_000", 00:24:22.957 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:24:22.957 "listen_address": { 00:24:22.957 "trtype": "RDMA", 00:24:22.957 "adrfam": "IPv4", 00:24:22.957 "traddr": "192.168.100.8", 00:24:22.957 "trsvcid": "4420" 00:24:22.957 }, 00:24:22.957 "peer_address": { 00:24:22.957 "trtype": "RDMA", 00:24:22.957 "adrfam": "IPv4", 00:24:22.957 "traddr": "192.168.100.8", 00:24:22.957 "trsvcid": "56813" 00:24:22.957 }, 00:24:22.957 "auth": { 00:24:22.957 "state": "completed", 00:24:22.957 "digest": "sha256", 00:24:22.957 "dhgroup": "ffdhe8192" 00:24:22.957 } 00:24:22.957 } 00:24:22.957 ]' 00:24:22.957 13:55:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:24:22.957 13:55:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:24:22.957 13:55:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:24:22.957 13:55:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:24:22.957 13:55:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:24:23.215 13:55:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:23.215 13:55:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:23.215 13:55:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:23.215 13:55:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODAzZDVlMDFlMzFkNmU3OWY5NDhkNjQ1Zjk4YmRiZTE0NTdiMzllOGYwYzkxOGQwj682eA==: --dhchap-ctrl-secret DHHC-1:03:M2RhOGMzZGFlYjJmMDhlNWZlZTc2Njc1YWJiM2NmMWE3ZmQ0NDBjNTkxMGIxYmJiYTVkYTM0ZmNjNWUxYTI4NCRCfwA=: 00:24:23.215 13:55:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ODAzZDVlMDFlMzFkNmU3OWY5NDhkNjQ1Zjk4YmRiZTE0NTdiMzllOGYwYzkxOGQwj682eA==: --dhchap-ctrl-secret DHHC-1:03:M2RhOGMzZGFlYjJmMDhlNWZlZTc2Njc1YWJiM2NmMWE3ZmQ0NDBjNTkxMGIxYmJiYTVkYTM0ZmNjNWUxYTI4NCRCfwA=: 00:24:23.780 13:55:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:24.038 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:24.038 13:55:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:24:24.038 13:55:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.038 13:55:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:24.038 13:55:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.038 13:55:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:24:24.038 13:55:23 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:24.038 13:55:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:24.296 13:55:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:24:24.296 13:55:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:24:24.296 13:55:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:24:24.296 13:55:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:24:24.296 13:55:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:24:24.296 13:55:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:24.296 13:55:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:24.296 13:55:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.296 13:55:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:24.296 13:55:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.296 13:55:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:24.296 13:55:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:24.296 13:55:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:24.554 00:24:24.554 13:55:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:24:24.554 13:55:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:24:24.554 13:55:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:24.813 13:55:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:24.813 13:55:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:24.813 13:55:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.813 13:55:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:24:24.813 13:55:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.813 13:55:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:24:24.813 { 00:24:24.813 "cntlid": 43, 00:24:24.813 "qid": 0, 00:24:24.813 "state": "enabled", 00:24:24.813 "thread": "nvmf_tgt_poll_group_000", 00:24:24.813 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:24:24.813 "listen_address": { 00:24:24.813 "trtype": "RDMA", 00:24:24.813 "adrfam": "IPv4", 00:24:24.813 "traddr": "192.168.100.8", 00:24:24.813 "trsvcid": "4420" 00:24:24.813 }, 00:24:24.813 "peer_address": { 00:24:24.813 "trtype": "RDMA", 00:24:24.813 "adrfam": "IPv4", 00:24:24.813 "traddr": "192.168.100.8", 00:24:24.813 "trsvcid": "37462" 00:24:24.813 }, 00:24:24.813 "auth": { 00:24:24.813 "state": "completed", 00:24:24.813 "digest": "sha256", 00:24:24.813 "dhgroup": "ffdhe8192" 00:24:24.813 } 00:24:24.813 } 00:24:24.813 ]' 00:24:24.814 13:55:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:24:24.814 13:55:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:24:24.814 13:55:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:24:24.814 13:55:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:24:24.814 13:55:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:24:25.088 13:55:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:25.088 13:55:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:25.088 13:55:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:25.088 13:55:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTdjNTRmZDg5N2Q5M2IwNjk0NDA5NTkzNjg0YWZmNDU20hyw: --dhchap-ctrl-secret DHHC-1:02:YWMyZDczYzA3MWM5ZDI5ZjZlZjVjYjYxZTgxMTkxYTMxZWJlNTgxODExYTE1M2YwB17WsA==: 00:24:25.088 13:55:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OTdjNTRmZDg5N2Q5M2IwNjk0NDA5NTkzNjg0YWZmNDU20hyw: --dhchap-ctrl-secret DHHC-1:02:YWMyZDczYzA3MWM5ZDI5ZjZlZjVjYjYxZTgxMTkxYTMxZWJlNTgxODExYTE1M2YwB17WsA==: 00:24:25.655 13:55:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:25.913 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:25.913 13:55:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:24:25.913 13:55:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.913 13:55:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
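
The trace above repeats one DH-HMAC-CHAP verification cycle per digest/dhgroup/key combination: bdev_nvme_set_options on the host restricts the allowed digest and DH group, nvmf_subsystem_add_host on the target grants the host NQN access with --dhchap-key (plus --dhchap-ctrlr-key when bidirectional authentication is exercised), bdev_nvme_attach_controller performs the authenticated RDMA connect, and nvmf_subsystem_get_qpairs is checked with jq for the negotiated digest, dhgroup, and a "completed" auth state before the controller is detached; the same secrets are then re-validated through the kernel initiator with nvme connect / nvme disconnect, and access is revoked with nvmf_subsystem_remove_host. What follows is a minimal sketch of one such cycle, not the test script itself: rpc.py abbreviates the scripts/rpc.py invocation shown in the trace, and it assumes the DH-CHAP keys key2/ckey2 were already registered with both the target and host RPC servers earlier in the run.

# Host side: permit only sha256 / ffdhe8192 for DH-HMAC-CHAP negotiation.
rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192

# Target side: allow the host NQN, binding host key key2 and controller key ckey2.
rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 \
    --dhchap-key key2 --dhchap-ctrlr-key ckey2

# Authenticated attach over RDMA from the host side.
rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 \
    -a 192.168.100.8 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 \
    -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
    --dhchap-key key2 --dhchap-ctrlr-key ckey2

# Verify what the target negotiated on the new qpair.
rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
    | jq -r '.[0].auth.digest, .[0].auth.dhgroup, .[0].auth.state'
# expected output: sha256 / ffdhe8192 / completed

# Tear down before the next digest/dhgroup/key combination.
rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

The kernel-initiator leg of each cycle is the same handshake expressed through nvme-cli, passing the raw DHHC-1 secrets directly (--dhchap-secret / --dhchap-ctrl-secret) instead of pre-registered key names, as the nvme connect lines in this trace show.
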
00:24:25.913 13:55:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.913 13:55:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:24:25.913 13:55:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:25.913 13:55:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:26.171 13:55:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:24:26.171 13:55:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:24:26.171 13:55:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:24:26.171 13:55:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:24:26.171 13:55:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:24:26.171 13:55:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:26.171 13:55:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:26.171 13:55:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.171 13:55:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:26.171 13:55:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.171 13:55:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:26.171 13:55:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:26.171 13:55:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:26.430 00:24:26.430 13:55:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:24:26.430 13:55:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:24:26.430 13:55:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:26.689 13:55:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:26.689 13:55:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # 
rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:26.689 13:55:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.689 13:55:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:26.689 13:55:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.689 13:55:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:24:26.689 { 00:24:26.689 "cntlid": 45, 00:24:26.689 "qid": 0, 00:24:26.689 "state": "enabled", 00:24:26.689 "thread": "nvmf_tgt_poll_group_000", 00:24:26.689 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:24:26.689 "listen_address": { 00:24:26.689 "trtype": "RDMA", 00:24:26.689 "adrfam": "IPv4", 00:24:26.689 "traddr": "192.168.100.8", 00:24:26.689 "trsvcid": "4420" 00:24:26.689 }, 00:24:26.689 "peer_address": { 00:24:26.689 "trtype": "RDMA", 00:24:26.689 "adrfam": "IPv4", 00:24:26.689 "traddr": "192.168.100.8", 00:24:26.689 "trsvcid": "42176" 00:24:26.689 }, 00:24:26.689 "auth": { 00:24:26.689 "state": "completed", 00:24:26.689 "digest": "sha256", 00:24:26.689 "dhgroup": "ffdhe8192" 00:24:26.689 } 00:24:26.689 } 00:24:26.689 ]' 00:24:26.689 13:55:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:24:26.689 13:55:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:24:26.689 13:55:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:24:26.689 13:55:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:24:26.689 13:55:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:24:26.689 13:55:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:26.689 13:55:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:26.689 13:55:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:26.948 13:55:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzlkODZiZTFjMDUxY2RhNGEwZmVmZWJmYzEwMDJlMzFiZGUxMDE0Njk4MDk5MDRjEe2NXQ==: --dhchap-ctrl-secret DHHC-1:01:OTgxZWY0M2NiMWNlZmUwNjg3MGY5ODQ3YmE4MGJlZGH+9nhS: 00:24:26.948 13:55:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YzlkODZiZTFjMDUxY2RhNGEwZmVmZWJmYzEwMDJlMzFiZGUxMDE0Njk4MDk5MDRjEe2NXQ==: --dhchap-ctrl-secret DHHC-1:01:OTgxZWY0M2NiMWNlZmUwNjg3MGY5ODQ3YmE4MGJlZGH+9nhS: 00:24:27.515 13:55:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:27.773 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:27.773 13:55:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:24:27.773 13:55:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.773 13:55:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:27.773 13:55:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.773 13:55:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:24:27.773 13:55:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:27.773 13:55:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:27.773 13:55:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:24:27.773 13:55:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:24:27.773 13:55:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:24:27.773 13:55:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:24:27.773 13:55:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:24:27.773 13:55:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:27.773 13:55:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key3 00:24:27.773 13:55:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.773 13:55:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:27.773 13:55:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.773 13:55:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:24:27.774 13:55:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:24:27.774 13:55:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:24:28.340 00:24:28.340 13:55:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:24:28.340 13:55:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:24:28.340 13:55:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:28.600 
13:55:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:28.600 13:55:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:28.600 13:55:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.600 13:55:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:28.600 13:55:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.600 13:55:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:24:28.600 { 00:24:28.600 "cntlid": 47, 00:24:28.600 "qid": 0, 00:24:28.600 "state": "enabled", 00:24:28.600 "thread": "nvmf_tgt_poll_group_000", 00:24:28.600 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:24:28.600 "listen_address": { 00:24:28.600 "trtype": "RDMA", 00:24:28.600 "adrfam": "IPv4", 00:24:28.600 "traddr": "192.168.100.8", 00:24:28.600 "trsvcid": "4420" 00:24:28.600 }, 00:24:28.600 "peer_address": { 00:24:28.600 "trtype": "RDMA", 00:24:28.600 "adrfam": "IPv4", 00:24:28.600 "traddr": "192.168.100.8", 00:24:28.600 "trsvcid": "53549" 00:24:28.600 }, 00:24:28.600 "auth": { 00:24:28.600 "state": "completed", 00:24:28.600 "digest": "sha256", 00:24:28.600 "dhgroup": "ffdhe8192" 00:24:28.600 } 00:24:28.600 } 00:24:28.600 ]' 00:24:28.600 13:55:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:24:28.600 13:55:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:24:28.600 13:55:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:24:28.600 13:55:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:24:28.600 13:55:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:24:28.600 13:55:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:28.600 13:55:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:28.600 13:55:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:28.858 13:55:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MDU2MGQ0Zjk3MGRlOGZmZjNiNTFmZTAxZTk3M2E2OWMxMGQwODU5NDIwODhlMjNjYTA1ZWJiMjMxYThlY2U2YWHslm0=: 00:24:28.858 13:55:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MDU2MGQ0Zjk3MGRlOGZmZjNiNTFmZTAxZTk3M2E2OWMxMGQwODU5NDIwODhlMjNjYTA1ZWJiMjMxYThlY2U2YWHslm0=: 00:24:29.425 13:55:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:29.684 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:29.684 13:55:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:24:29.684 13:55:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.684 13:55:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:29.684 13:55:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.684 13:55:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:24:29.684 13:55:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:24:29.684 13:55:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:24:29.684 13:55:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:24:29.684 13:55:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:24:29.684 13:55:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:24:29.684 13:55:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:24:29.684 13:55:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:24:29.684 13:55:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:24:29.684 13:55:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:24:29.684 13:55:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:29.684 13:55:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:29.684 13:55:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.684 13:55:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:29.684 13:55:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.684 13:55:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:29.684 13:55:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:29.685 13:55:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:29.943 00:24:29.943 13:55:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
hostrpc bdev_nvme_get_controllers 00:24:29.943 13:55:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:24:29.943 13:55:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:30.202 13:55:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:30.202 13:55:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:30.202 13:55:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.202 13:55:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:30.202 13:55:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:30.202 13:55:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:24:30.202 { 00:24:30.202 "cntlid": 49, 00:24:30.202 "qid": 0, 00:24:30.202 "state": "enabled", 00:24:30.202 "thread": "nvmf_tgt_poll_group_000", 00:24:30.202 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:24:30.202 "listen_address": { 00:24:30.202 "trtype": "RDMA", 00:24:30.202 "adrfam": "IPv4", 00:24:30.202 "traddr": "192.168.100.8", 00:24:30.202 "trsvcid": "4420" 00:24:30.202 }, 00:24:30.202 "peer_address": { 00:24:30.202 "trtype": "RDMA", 00:24:30.202 "adrfam": "IPv4", 00:24:30.202 "traddr": "192.168.100.8", 00:24:30.202 "trsvcid": "59841" 00:24:30.202 }, 00:24:30.202 "auth": { 00:24:30.202 "state": "completed", 00:24:30.202 "digest": "sha384", 00:24:30.202 "dhgroup": "null" 00:24:30.202 } 00:24:30.202 } 00:24:30.202 ]' 00:24:30.202 13:55:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:24:30.202 13:55:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:24:30.202 13:55:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:24:30.202 13:55:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:24:30.202 13:55:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:24:30.202 13:55:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:30.202 13:55:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:30.202 13:55:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:30.461 13:55:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODAzZDVlMDFlMzFkNmU3OWY5NDhkNjQ1Zjk4YmRiZTE0NTdiMzllOGYwYzkxOGQwj682eA==: --dhchap-ctrl-secret DHHC-1:03:M2RhOGMzZGFlYjJmMDhlNWZlZTc2Njc1YWJiM2NmMWE3ZmQ0NDBjNTkxMGIxYmJiYTVkYTM0ZmNjNWUxYTI4NCRCfwA=: 00:24:30.461 13:55:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret 
DHHC-1:00:ODAzZDVlMDFlMzFkNmU3OWY5NDhkNjQ1Zjk4YmRiZTE0NTdiMzllOGYwYzkxOGQwj682eA==: --dhchap-ctrl-secret DHHC-1:03:M2RhOGMzZGFlYjJmMDhlNWZlZTc2Njc1YWJiM2NmMWE3ZmQ0NDBjNTkxMGIxYmJiYTVkYTM0ZmNjNWUxYTI4NCRCfwA=: 00:24:31.027 13:55:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:31.284 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:31.284 13:55:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:24:31.284 13:55:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.284 13:55:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:31.284 13:55:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.284 13:55:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:24:31.284 13:55:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:24:31.284 13:55:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:24:31.542 13:55:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:24:31.542 13:55:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:24:31.542 13:55:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:24:31.542 13:55:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:24:31.542 13:55:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:24:31.542 13:55:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:31.542 13:55:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:31.542 13:55:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.542 13:55:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:31.542 13:55:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.542 13:55:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:31.542 13:55:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:31.542 13:55:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma 
-f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:31.542 00:24:31.800 13:55:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:24:31.800 13:55:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:24:31.800 13:55:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:31.800 13:55:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:31.800 13:55:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:31.800 13:55:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.800 13:55:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:31.800 13:55:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.800 13:55:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:24:31.800 { 00:24:31.800 "cntlid": 51, 00:24:31.800 "qid": 0, 00:24:31.800 "state": "enabled", 00:24:31.800 "thread": "nvmf_tgt_poll_group_000", 00:24:31.800 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:24:31.800 "listen_address": { 00:24:31.800 "trtype": "RDMA", 00:24:31.800 "adrfam": "IPv4", 00:24:31.800 "traddr": "192.168.100.8", 00:24:31.800 "trsvcid": "4420" 00:24:31.800 }, 00:24:31.800 "peer_address": { 00:24:31.800 "trtype": "RDMA", 00:24:31.800 "adrfam": "IPv4", 00:24:31.800 "traddr": "192.168.100.8", 00:24:31.800 "trsvcid": "37576" 00:24:31.800 }, 00:24:31.800 "auth": { 00:24:31.800 "state": "completed", 00:24:31.800 "digest": "sha384", 00:24:31.800 "dhgroup": "null" 00:24:31.800 } 00:24:31.800 } 00:24:31.800 ]' 00:24:31.800 13:55:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:24:31.800 13:55:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:24:31.800 13:55:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:24:32.057 13:55:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:24:32.057 13:55:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:24:32.057 13:55:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:32.057 13:55:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:32.057 13:55:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:32.057 13:55:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTdjNTRmZDg5N2Q5M2IwNjk0NDA5NTkzNjg0YWZmNDU20hyw: --dhchap-ctrl-secret DHHC-1:02:YWMyZDczYzA3MWM5ZDI5ZjZlZjVjYjYxZTgxMTkxYTMxZWJlNTgxODExYTE1M2YwB17WsA==: 00:24:32.057 13:55:31 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OTdjNTRmZDg5N2Q5M2IwNjk0NDA5NTkzNjg0YWZmNDU20hyw: --dhchap-ctrl-secret DHHC-1:02:YWMyZDczYzA3MWM5ZDI5ZjZlZjVjYjYxZTgxMTkxYTMxZWJlNTgxODExYTE1M2YwB17WsA==: 00:24:32.989 13:55:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:32.989 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:32.989 13:55:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:24:32.989 13:55:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.989 13:55:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:32.989 13:55:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.989 13:55:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:24:32.989 13:55:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:24:32.989 13:55:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:24:32.989 13:55:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:24:32.989 13:55:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:24:32.989 13:55:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:24:32.989 13:55:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:24:32.989 13:55:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:24:32.989 13:55:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:32.989 13:55:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:32.989 13:55:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.989 13:55:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:32.989 13:55:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.989 13:55:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:32.989 13:55:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key 
key2 --dhchap-ctrlr-key ckey2 00:24:32.989 13:55:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:33.247 00:24:33.247 13:55:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:24:33.247 13:55:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:24:33.247 13:55:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:33.505 13:55:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:33.505 13:55:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:33.505 13:55:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.505 13:55:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:33.505 13:55:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.505 13:55:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:24:33.505 { 00:24:33.505 "cntlid": 53, 00:24:33.505 "qid": 0, 00:24:33.505 "state": "enabled", 00:24:33.505 "thread": "nvmf_tgt_poll_group_000", 00:24:33.505 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:24:33.505 "listen_address": { 00:24:33.505 "trtype": "RDMA", 00:24:33.505 "adrfam": "IPv4", 00:24:33.505 "traddr": "192.168.100.8", 00:24:33.505 "trsvcid": "4420" 00:24:33.505 }, 00:24:33.505 "peer_address": { 00:24:33.505 "trtype": "RDMA", 00:24:33.505 "adrfam": "IPv4", 00:24:33.505 "traddr": "192.168.100.8", 00:24:33.505 "trsvcid": "39703" 00:24:33.505 }, 00:24:33.505 "auth": { 00:24:33.505 "state": "completed", 00:24:33.505 "digest": "sha384", 00:24:33.505 "dhgroup": "null" 00:24:33.505 } 00:24:33.505 } 00:24:33.505 ]' 00:24:33.505 13:55:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:24:33.505 13:55:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:24:33.505 13:55:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:24:33.505 13:55:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:24:33.505 13:55:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:24:33.764 13:55:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:33.764 13:55:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:33.764 13:55:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:33.764 13:55:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect 
--dhchap-secret DHHC-1:02:YzlkODZiZTFjMDUxY2RhNGEwZmVmZWJmYzEwMDJlMzFiZGUxMDE0Njk4MDk5MDRjEe2NXQ==: --dhchap-ctrl-secret DHHC-1:01:OTgxZWY0M2NiMWNlZmUwNjg3MGY5ODQ3YmE4MGJlZGH+9nhS: 00:24:33.764 13:55:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YzlkODZiZTFjMDUxY2RhNGEwZmVmZWJmYzEwMDJlMzFiZGUxMDE0Njk4MDk5MDRjEe2NXQ==: --dhchap-ctrl-secret DHHC-1:01:OTgxZWY0M2NiMWNlZmUwNjg3MGY5ODQ3YmE4MGJlZGH+9nhS: 00:24:34.332 13:55:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:34.590 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:34.590 13:55:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:24:34.590 13:55:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.590 13:55:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:34.590 13:55:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.590 13:55:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:24:34.590 13:55:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:24:34.590 13:55:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:24:34.848 13:55:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:24:34.848 13:55:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:24:34.848 13:55:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:24:34.848 13:55:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:24:34.848 13:55:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:24:34.848 13:55:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:34.848 13:55:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key3 00:24:34.848 13:55:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.849 13:55:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:34.849 13:55:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.849 13:55:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:24:34.849 13:55:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 
-a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:24:34.849 13:55:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:24:35.107 00:24:35.107 13:55:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:24:35.107 13:55:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:24:35.107 13:55:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:35.107 13:55:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:35.107 13:55:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:35.107 13:55:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.107 13:55:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:35.107 13:55:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.107 13:55:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:24:35.107 { 00:24:35.107 "cntlid": 55, 00:24:35.107 "qid": 0, 00:24:35.107 "state": "enabled", 00:24:35.107 "thread": "nvmf_tgt_poll_group_000", 00:24:35.107 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:24:35.107 "listen_address": { 00:24:35.107 "trtype": "RDMA", 00:24:35.107 "adrfam": "IPv4", 00:24:35.107 "traddr": "192.168.100.8", 00:24:35.107 "trsvcid": "4420" 00:24:35.107 }, 00:24:35.107 "peer_address": { 00:24:35.107 "trtype": "RDMA", 00:24:35.107 "adrfam": "IPv4", 00:24:35.107 "traddr": "192.168.100.8", 00:24:35.107 "trsvcid": "35118" 00:24:35.107 }, 00:24:35.107 "auth": { 00:24:35.107 "state": "completed", 00:24:35.107 "digest": "sha384", 00:24:35.107 "dhgroup": "null" 00:24:35.107 } 00:24:35.107 } 00:24:35.107 ]' 00:24:35.107 13:55:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:24:35.366 13:55:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:24:35.366 13:55:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:24:35.366 13:55:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:24:35.366 13:55:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:24:35.366 13:55:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:35.366 13:55:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:35.366 13:55:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 
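The trace above is one pass of the suite's per-key cycle: hostrpc bdev_nvme_set_options pins the host to a single digest/DH-group pair, rpc_cmd nvmf_subsystem_add_host registers the host NQN on the subsystem with the key under test (key3 carries no controller key, so its --dhchap-ctrlr-key is omitted), bdev_nvme_attach_controller performs the authenticated attach over RDMA, and nvmf_subsystem_get_qpairs is filtered through jq to confirm the qpair reports the expected digest, dhgroup, and auth state "completed" before the controller is detached again. A condensed sketch of that cycle, reusing the socket path, addresses, and NQNs from this run; the key names key0-key3 are registered earlier in auth.sh, outside this excerpt, and the target-side socket is an assumption here (the trace's rpc_cmd helper resolves it elsewhere in autotest_common.sh):

    # hostrpc / rpc_cmd equivalents from the trace; trpc assumes rpc.py's default target socket
    hrpc() { /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock "$@"; }
    trpc() { /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py "$@"; }
    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562

    # pin the host to one digest/dhgroup combination, then authenticate with the key under test
    hrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
    trpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key3
    hrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
        -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key key3
    trpc nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.state'   # expect "completed"
    hrpc bdev_nvme_detach_controller nvme0
    trpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"
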
00:24:35.624 13:55:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MDU2MGQ0Zjk3MGRlOGZmZjNiNTFmZTAxZTk3M2E2OWMxMGQwODU5NDIwODhlMjNjYTA1ZWJiMjMxYThlY2U2YWHslm0=: 00:24:35.624 13:55:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MDU2MGQ0Zjk3MGRlOGZmZjNiNTFmZTAxZTk3M2E2OWMxMGQwODU5NDIwODhlMjNjYTA1ZWJiMjMxYThlY2U2YWHslm0=: 00:24:36.191 13:55:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:36.191 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:36.191 13:55:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:24:36.191 13:55:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.191 13:55:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:36.191 13:55:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:36.192 13:55:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:24:36.192 13:55:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:24:36.192 13:55:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:36.192 13:55:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:36.451 13:55:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:24:36.451 13:55:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:24:36.451 13:55:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:24:36.451 13:55:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:24:36.451 13:55:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:24:36.451 13:55:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:36.451 13:55:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:36.451 13:55:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.451 13:55:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:36.451 13:55:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:36.451 13:55:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:36.451 13:55:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:36.451 13:55:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:36.709 00:24:36.709 13:55:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:24:36.709 13:55:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:24:36.709 13:55:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:36.709 13:55:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:36.709 13:55:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:36.709 13:55:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.709 13:55:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:36.968 13:55:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:36.968 13:55:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:24:36.968 { 00:24:36.968 "cntlid": 57, 00:24:36.968 "qid": 0, 00:24:36.968 "state": "enabled", 00:24:36.968 "thread": "nvmf_tgt_poll_group_000", 00:24:36.968 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:24:36.968 "listen_address": { 00:24:36.968 "trtype": "RDMA", 00:24:36.968 "adrfam": "IPv4", 00:24:36.968 "traddr": "192.168.100.8", 00:24:36.968 "trsvcid": "4420" 00:24:36.968 }, 00:24:36.968 "peer_address": { 00:24:36.968 "trtype": "RDMA", 00:24:36.968 "adrfam": "IPv4", 00:24:36.968 "traddr": "192.168.100.8", 00:24:36.968 "trsvcid": "55422" 00:24:36.968 }, 00:24:36.968 "auth": { 00:24:36.968 "state": "completed", 00:24:36.968 "digest": "sha384", 00:24:36.968 "dhgroup": "ffdhe2048" 00:24:36.968 } 00:24:36.968 } 00:24:36.968 ]' 00:24:36.968 13:55:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:24:36.968 13:55:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:24:36.968 13:55:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:24:36.968 13:55:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:24:36.968 13:55:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:24:36.968 13:55:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:36.968 13:55:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # 
hostrpc bdev_nvme_detach_controller nvme0 00:24:36.968 13:55:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:37.273 13:55:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODAzZDVlMDFlMzFkNmU3OWY5NDhkNjQ1Zjk4YmRiZTE0NTdiMzllOGYwYzkxOGQwj682eA==: --dhchap-ctrl-secret DHHC-1:03:M2RhOGMzZGFlYjJmMDhlNWZlZTc2Njc1YWJiM2NmMWE3ZmQ0NDBjNTkxMGIxYmJiYTVkYTM0ZmNjNWUxYTI4NCRCfwA=: 00:24:37.273 13:55:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ODAzZDVlMDFlMzFkNmU3OWY5NDhkNjQ1Zjk4YmRiZTE0NTdiMzllOGYwYzkxOGQwj682eA==: --dhchap-ctrl-secret DHHC-1:03:M2RhOGMzZGFlYjJmMDhlNWZlZTc2Njc1YWJiM2NmMWE3ZmQ0NDBjNTkxMGIxYmJiYTVkYTM0ZmNjNWUxYTI4NCRCfwA=: 00:24:37.903 13:55:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:37.903 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:37.903 13:55:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:24:37.903 13:55:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:37.903 13:55:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:37.903 13:55:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:37.903 13:55:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:24:37.903 13:55:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:37.903 13:55:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:38.161 13:55:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:24:38.161 13:55:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:24:38.161 13:55:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:24:38.161 13:55:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:24:38.161 13:55:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:24:38.161 13:55:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:38.162 13:55:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:38.162 13:55:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.162 
13:55:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:38.162 13:55:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.162 13:55:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:38.162 13:55:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:38.162 13:55:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:38.162 00:24:38.162 13:55:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:24:38.162 13:55:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:24:38.162 13:55:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:38.420 13:55:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:38.420 13:55:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:38.420 13:55:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.420 13:55:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:38.420 13:55:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.420 13:55:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:24:38.420 { 00:24:38.420 "cntlid": 59, 00:24:38.420 "qid": 0, 00:24:38.420 "state": "enabled", 00:24:38.420 "thread": "nvmf_tgt_poll_group_000", 00:24:38.420 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:24:38.420 "listen_address": { 00:24:38.420 "trtype": "RDMA", 00:24:38.420 "adrfam": "IPv4", 00:24:38.420 "traddr": "192.168.100.8", 00:24:38.420 "trsvcid": "4420" 00:24:38.420 }, 00:24:38.420 "peer_address": { 00:24:38.420 "trtype": "RDMA", 00:24:38.420 "adrfam": "IPv4", 00:24:38.420 "traddr": "192.168.100.8", 00:24:38.420 "trsvcid": "58793" 00:24:38.420 }, 00:24:38.420 "auth": { 00:24:38.420 "state": "completed", 00:24:38.420 "digest": "sha384", 00:24:38.420 "dhgroup": "ffdhe2048" 00:24:38.420 } 00:24:38.420 } 00:24:38.420 ]' 00:24:38.420 13:55:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:24:38.420 13:55:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:24:38.420 13:55:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:24:38.679 13:55:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 
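Each RPC-driven verification is closed out by the same check from the Linux initiator (the trace's nvme_connect helper, target/auth.sh@36, which runs again just below): nvme-cli connects with the plaintext DHHC-1 secrets and disconnects, exercising the in-kernel DH-HMAC-CHAP path against the same subsystem. A standalone equivalent with the secrets reduced to placeholders -- the full base64 strings are the ones printed in the trace; in the DHHC-1:<nn>: representation, <nn> names the hash the secret was transformed with (00 unhashed, 01/02/03 for SHA-256/384/512), independent of the digest negotiated on the wire:

    # -i 1 limits I/O queues; -l 0 (ctrl-loss-tmo) makes a failed association fail fast
    nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 \
        --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 \
        --dhchap-secret 'DHHC-1:01:<host key>' --dhchap-ctrl-secret 'DHHC-1:02:<ctrl key>'
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0   # expect: disconnected 1 controller(s)
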
00:24:38.679 13:55:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:24:38.679 13:55:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:38.679 13:55:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:38.679 13:55:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:38.679 13:55:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTdjNTRmZDg5N2Q5M2IwNjk0NDA5NTkzNjg0YWZmNDU20hyw: --dhchap-ctrl-secret DHHC-1:02:YWMyZDczYzA3MWM5ZDI5ZjZlZjVjYjYxZTgxMTkxYTMxZWJlNTgxODExYTE1M2YwB17WsA==: 00:24:38.679 13:55:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OTdjNTRmZDg5N2Q5M2IwNjk0NDA5NTkzNjg0YWZmNDU20hyw: --dhchap-ctrl-secret DHHC-1:02:YWMyZDczYzA3MWM5ZDI5ZjZlZjVjYjYxZTgxMTkxYTMxZWJlNTgxODExYTE1M2YwB17WsA==: 00:24:39.246 13:55:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:39.507 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:39.507 13:55:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:24:39.507 13:55:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.507 13:55:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:39.507 13:55:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.507 13:55:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:24:39.507 13:55:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:39.507 13:55:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:39.766 13:55:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:24:39.766 13:55:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:24:39.766 13:55:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:24:39.766 13:55:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:24:39.766 13:55:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:24:39.766 13:55:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:39.766 13:55:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:39.766 13:55:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.766 13:55:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:39.766 13:55:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.766 13:55:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:39.766 13:55:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:39.766 13:55:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:40.024 00:24:40.024 13:55:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:24:40.024 13:55:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:24:40.024 13:55:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:40.024 13:55:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:40.024 13:55:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:40.024 13:55:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.024 13:55:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:40.024 13:55:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.024 13:55:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:24:40.024 { 00:24:40.024 "cntlid": 61, 00:24:40.024 "qid": 0, 00:24:40.024 "state": "enabled", 00:24:40.024 "thread": "nvmf_tgt_poll_group_000", 00:24:40.024 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:24:40.024 "listen_address": { 00:24:40.024 "trtype": "RDMA", 00:24:40.024 "adrfam": "IPv4", 00:24:40.024 "traddr": "192.168.100.8", 00:24:40.024 "trsvcid": "4420" 00:24:40.024 }, 00:24:40.024 "peer_address": { 00:24:40.024 "trtype": "RDMA", 00:24:40.024 "adrfam": "IPv4", 00:24:40.024 "traddr": "192.168.100.8", 00:24:40.024 "trsvcid": "35564" 00:24:40.024 }, 00:24:40.024 "auth": { 00:24:40.024 "state": "completed", 00:24:40.024 "digest": "sha384", 00:24:40.024 "dhgroup": "ffdhe2048" 00:24:40.024 } 00:24:40.024 } 00:24:40.024 ]' 00:24:40.024 13:55:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:24:40.282 13:55:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == 
\s\h\a\3\8\4 ]] 00:24:40.282 13:55:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:24:40.282 13:55:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:24:40.282 13:55:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:24:40.282 13:55:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:40.282 13:55:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:40.282 13:55:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:40.540 13:55:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzlkODZiZTFjMDUxY2RhNGEwZmVmZWJmYzEwMDJlMzFiZGUxMDE0Njk4MDk5MDRjEe2NXQ==: --dhchap-ctrl-secret DHHC-1:01:OTgxZWY0M2NiMWNlZmUwNjg3MGY5ODQ3YmE4MGJlZGH+9nhS: 00:24:40.540 13:55:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YzlkODZiZTFjMDUxY2RhNGEwZmVmZWJmYzEwMDJlMzFiZGUxMDE0Njk4MDk5MDRjEe2NXQ==: --dhchap-ctrl-secret DHHC-1:01:OTgxZWY0M2NiMWNlZmUwNjg3MGY5ODQ3YmE4MGJlZGH+9nhS: 00:24:41.104 13:55:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:41.104 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:41.104 13:55:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:24:41.104 13:55:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.104 13:55:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:41.104 13:55:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:41.104 13:55:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:24:41.104 13:55:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:41.104 13:55:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:41.362 13:55:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:24:41.362 13:55:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:24:41.362 13:55:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:24:41.362 13:55:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:24:41.362 13:55:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:24:41.362 13:55:41 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:41.362 13:55:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key3 00:24:41.362 13:55:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.362 13:55:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:41.362 13:55:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:41.362 13:55:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:24:41.363 13:55:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:24:41.363 13:55:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:24:41.620 00:24:41.620 13:55:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:24:41.620 13:55:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:41.620 13:55:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:24:41.879 13:55:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:41.879 13:55:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:41.879 13:55:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.879 13:55:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:41.879 13:55:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:41.879 13:55:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:24:41.879 { 00:24:41.879 "cntlid": 63, 00:24:41.879 "qid": 0, 00:24:41.879 "state": "enabled", 00:24:41.879 "thread": "nvmf_tgt_poll_group_000", 00:24:41.879 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:24:41.879 "listen_address": { 00:24:41.879 "trtype": "RDMA", 00:24:41.879 "adrfam": "IPv4", 00:24:41.879 "traddr": "192.168.100.8", 00:24:41.879 "trsvcid": "4420" 00:24:41.879 }, 00:24:41.879 "peer_address": { 00:24:41.879 "trtype": "RDMA", 00:24:41.879 "adrfam": "IPv4", 00:24:41.879 "traddr": "192.168.100.8", 00:24:41.879 "trsvcid": "59076" 00:24:41.879 }, 00:24:41.879 "auth": { 00:24:41.879 "state": "completed", 00:24:41.879 "digest": "sha384", 00:24:41.879 "dhgroup": "ffdhe2048" 00:24:41.879 } 00:24:41.879 } 00:24:41.879 ]' 00:24:41.879 13:55:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:24:41.879 13:55:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:24:41.879 13:55:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:24:41.879 13:55:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:24:41.879 13:55:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:24:41.879 13:55:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:41.879 13:55:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:41.879 13:55:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:42.138 13:55:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MDU2MGQ0Zjk3MGRlOGZmZjNiNTFmZTAxZTk3M2E2OWMxMGQwODU5NDIwODhlMjNjYTA1ZWJiMjMxYThlY2U2YWHslm0=: 00:24:42.138 13:55:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MDU2MGQ0Zjk3MGRlOGZmZjNiNTFmZTAxZTk3M2E2OWMxMGQwODU5NDIwODhlMjNjYTA1ZWJiMjMxYThlY2U2YWHslm0=: 00:24:42.707 13:55:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:42.707 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:42.707 13:55:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:24:42.707 13:55:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:42.707 13:55:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:42.707 13:55:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:42.707 13:55:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:24:42.707 13:55:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:24:42.707 13:55:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:42.707 13:55:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:42.965 13:55:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:24:42.965 13:55:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:24:42.965 13:55:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:24:42.965 13:55:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe3072 00:24:42.965 13:55:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:24:42.965 13:55:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:42.965 13:55:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:42.965 13:55:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:42.965 13:55:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:42.966 13:55:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:42.966 13:55:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:42.966 13:55:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:42.966 13:55:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:43.225 00:24:43.225 13:55:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:24:43.225 13:55:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:24:43.225 13:55:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:43.483 13:55:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:43.483 13:55:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:43.483 13:55:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.483 13:55:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:43.483 13:55:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.483 13:55:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:24:43.483 { 00:24:43.483 "cntlid": 65, 00:24:43.483 "qid": 0, 00:24:43.483 "state": "enabled", 00:24:43.483 "thread": "nvmf_tgt_poll_group_000", 00:24:43.483 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:24:43.483 "listen_address": { 00:24:43.483 "trtype": "RDMA", 00:24:43.483 "adrfam": "IPv4", 00:24:43.483 "traddr": "192.168.100.8", 00:24:43.483 "trsvcid": "4420" 00:24:43.483 }, 00:24:43.483 "peer_address": { 00:24:43.483 "trtype": "RDMA", 00:24:43.483 "adrfam": "IPv4", 00:24:43.483 "traddr": "192.168.100.8", 00:24:43.483 "trsvcid": "46867" 
00:24:43.483 }, 00:24:43.483 "auth": { 00:24:43.483 "state": "completed", 00:24:43.484 "digest": "sha384", 00:24:43.484 "dhgroup": "ffdhe3072" 00:24:43.484 } 00:24:43.484 } 00:24:43.484 ]' 00:24:43.484 13:55:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:24:43.484 13:55:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:24:43.484 13:55:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:24:43.484 13:55:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:24:43.484 13:55:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:24:43.484 13:55:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:43.484 13:55:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:43.484 13:55:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:43.742 13:55:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODAzZDVlMDFlMzFkNmU3OWY5NDhkNjQ1Zjk4YmRiZTE0NTdiMzllOGYwYzkxOGQwj682eA==: --dhchap-ctrl-secret DHHC-1:03:M2RhOGMzZGFlYjJmMDhlNWZlZTc2Njc1YWJiM2NmMWE3ZmQ0NDBjNTkxMGIxYmJiYTVkYTM0ZmNjNWUxYTI4NCRCfwA=: 00:24:43.742 13:55:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ODAzZDVlMDFlMzFkNmU3OWY5NDhkNjQ1Zjk4YmRiZTE0NTdiMzllOGYwYzkxOGQwj682eA==: --dhchap-ctrl-secret DHHC-1:03:M2RhOGMzZGFlYjJmMDhlNWZlZTc2Njc1YWJiM2NmMWE3ZmQ0NDBjNTkxMGIxYmJiYTVkYTM0ZmNjNWUxYTI4NCRCfwA=: 00:24:44.309 13:55:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:44.309 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:44.309 13:55:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:24:44.309 13:55:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.309 13:55:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:44.309 13:55:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.309 13:55:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:24:44.309 13:55:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:44.309 13:55:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:44.568 13:55:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # 
connect_authenticate sha384 ffdhe3072 1 00:24:44.568 13:55:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:24:44.568 13:55:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:24:44.568 13:55:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:24:44.568 13:55:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:24:44.568 13:55:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:44.568 13:55:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:44.568 13:55:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.568 13:55:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:44.568 13:55:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.568 13:55:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:44.568 13:55:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:44.568 13:55:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:44.826 00:24:44.826 13:55:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:24:44.826 13:55:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:44.826 13:55:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:24:45.086 13:55:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:45.086 13:55:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:45.086 13:55:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.086 13:55:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:45.086 13:55:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.086 13:55:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:24:45.086 { 00:24:45.086 "cntlid": 67, 00:24:45.086 "qid": 0, 00:24:45.086 "state": "enabled", 00:24:45.086 "thread": "nvmf_tgt_poll_group_000", 00:24:45.086 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 
00:24:45.086 "listen_address": { 00:24:45.086 "trtype": "RDMA", 00:24:45.086 "adrfam": "IPv4", 00:24:45.086 "traddr": "192.168.100.8", 00:24:45.086 "trsvcid": "4420" 00:24:45.086 }, 00:24:45.086 "peer_address": { 00:24:45.086 "trtype": "RDMA", 00:24:45.086 "adrfam": "IPv4", 00:24:45.086 "traddr": "192.168.100.8", 00:24:45.086 "trsvcid": "54464" 00:24:45.086 }, 00:24:45.086 "auth": { 00:24:45.086 "state": "completed", 00:24:45.086 "digest": "sha384", 00:24:45.086 "dhgroup": "ffdhe3072" 00:24:45.086 } 00:24:45.086 } 00:24:45.086 ]' 00:24:45.086 13:55:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:24:45.086 13:55:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:24:45.086 13:55:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:24:45.086 13:55:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:24:45.086 13:55:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:24:45.086 13:55:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:45.086 13:55:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:45.086 13:55:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:45.345 13:55:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTdjNTRmZDg5N2Q5M2IwNjk0NDA5NTkzNjg0YWZmNDU20hyw: --dhchap-ctrl-secret DHHC-1:02:YWMyZDczYzA3MWM5ZDI5ZjZlZjVjYjYxZTgxMTkxYTMxZWJlNTgxODExYTE1M2YwB17WsA==: 00:24:45.345 13:55:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OTdjNTRmZDg5N2Q5M2IwNjk0NDA5NTkzNjg0YWZmNDU20hyw: --dhchap-ctrl-secret DHHC-1:02:YWMyZDczYzA3MWM5ZDI5ZjZlZjVjYjYxZTgxMTkxYTMxZWJlNTgxODExYTE1M2YwB17WsA==: 00:24:45.910 13:55:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:45.910 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:45.910 13:55:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:24:45.910 13:55:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.910 13:55:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:45.910 13:55:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.910 13:55:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:24:45.910 13:55:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:45.910 13:55:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:46.169 13:55:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:24:46.169 13:55:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:24:46.169 13:55:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:24:46.169 13:55:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:24:46.169 13:55:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:24:46.169 13:55:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:46.169 13:55:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:46.169 13:55:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.169 13:55:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:46.169 13:55:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.169 13:55:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:46.169 13:55:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:46.169 13:55:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:46.427 00:24:46.427 13:55:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:24:46.427 13:55:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:24:46.427 13:55:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:46.686 13:55:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:46.686 13:55:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:46.686 13:55:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.686 13:55:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:46.686 13:55:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.686 13:55:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 
00:24:46.686 { 00:24:46.686 "cntlid": 69, 00:24:46.686 "qid": 0, 00:24:46.686 "state": "enabled", 00:24:46.686 "thread": "nvmf_tgt_poll_group_000", 00:24:46.686 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:24:46.686 "listen_address": { 00:24:46.686 "trtype": "RDMA", 00:24:46.686 "adrfam": "IPv4", 00:24:46.686 "traddr": "192.168.100.8", 00:24:46.686 "trsvcid": "4420" 00:24:46.686 }, 00:24:46.686 "peer_address": { 00:24:46.686 "trtype": "RDMA", 00:24:46.686 "adrfam": "IPv4", 00:24:46.686 "traddr": "192.168.100.8", 00:24:46.686 "trsvcid": "48124" 00:24:46.686 }, 00:24:46.686 "auth": { 00:24:46.686 "state": "completed", 00:24:46.686 "digest": "sha384", 00:24:46.686 "dhgroup": "ffdhe3072" 00:24:46.686 } 00:24:46.686 } 00:24:46.686 ]' 00:24:46.686 13:55:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:24:46.686 13:55:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:24:46.686 13:55:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:24:46.686 13:55:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:24:46.686 13:55:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:24:46.686 13:55:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:46.686 13:55:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:46.686 13:55:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:46.944 13:55:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzlkODZiZTFjMDUxY2RhNGEwZmVmZWJmYzEwMDJlMzFiZGUxMDE0Njk4MDk5MDRjEe2NXQ==: --dhchap-ctrl-secret DHHC-1:01:OTgxZWY0M2NiMWNlZmUwNjg3MGY5ODQ3YmE4MGJlZGH+9nhS: 00:24:46.944 13:55:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YzlkODZiZTFjMDUxY2RhNGEwZmVmZWJmYzEwMDJlMzFiZGUxMDE0Njk4MDk5MDRjEe2NXQ==: --dhchap-ctrl-secret DHHC-1:01:OTgxZWY0M2NiMWNlZmUwNjg3MGY5ODQ3YmE4MGJlZGH+9nhS: 00:24:47.534 13:55:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:47.534 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:47.534 13:55:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:24:47.534 13:55:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:47.534 13:55:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:47.534 13:55:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:47.534 13:55:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:24:47.534 13:55:47 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:47.534 13:55:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:47.793 13:55:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:24:47.793 13:55:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:24:47.793 13:55:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:24:47.793 13:55:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:24:47.793 13:55:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:24:47.793 13:55:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:47.793 13:55:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key3 00:24:47.793 13:55:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:47.793 13:55:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:47.793 13:55:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:47.793 13:55:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:24:47.793 13:55:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:24:47.793 13:55:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:24:48.052 00:24:48.052 13:55:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:24:48.052 13:55:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:24:48.052 13:55:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:48.312 13:55:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:48.312 13:55:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:48.312 13:55:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.312 13:55:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:48.312 13:55:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.312 13:55:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:24:48.312 { 00:24:48.312 "cntlid": 71, 00:24:48.312 "qid": 0, 00:24:48.312 "state": "enabled", 00:24:48.312 "thread": "nvmf_tgt_poll_group_000", 00:24:48.312 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:24:48.312 "listen_address": { 00:24:48.312 "trtype": "RDMA", 00:24:48.312 "adrfam": "IPv4", 00:24:48.312 "traddr": "192.168.100.8", 00:24:48.312 "trsvcid": "4420" 00:24:48.312 }, 00:24:48.312 "peer_address": { 00:24:48.312 "trtype": "RDMA", 00:24:48.312 "adrfam": "IPv4", 00:24:48.312 "traddr": "192.168.100.8", 00:24:48.312 "trsvcid": "46170" 00:24:48.312 }, 00:24:48.312 "auth": { 00:24:48.312 "state": "completed", 00:24:48.312 "digest": "sha384", 00:24:48.312 "dhgroup": "ffdhe3072" 00:24:48.312 } 00:24:48.312 } 00:24:48.312 ]' 00:24:48.312 13:55:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:24:48.312 13:55:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:24:48.312 13:55:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:24:48.312 13:55:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:24:48.312 13:55:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:24:48.312 13:55:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:48.312 13:55:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:48.312 13:55:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:48.572 13:55:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MDU2MGQ0Zjk3MGRlOGZmZjNiNTFmZTAxZTk3M2E2OWMxMGQwODU5NDIwODhlMjNjYTA1ZWJiMjMxYThlY2U2YWHslm0=: 00:24:48.572 13:55:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MDU2MGQ0Zjk3MGRlOGZmZjNiNTFmZTAxZTk3M2E2OWMxMGQwODU5NDIwODhlMjNjYTA1ZWJiMjMxYThlY2U2YWHslm0=: 00:24:49.140 13:55:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:49.399 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:49.399 13:55:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:24:49.399 13:55:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:49.399 13:55:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:49.399 13:55:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:49.399 13:55:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in 
"${dhgroups[@]}" 00:24:49.399 13:55:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:24:49.399 13:55:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:49.399 13:55:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:49.399 13:55:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:24:49.399 13:55:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:24:49.399 13:55:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:24:49.399 13:55:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:24:49.399 13:55:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:24:49.399 13:55:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:49.399 13:55:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:49.399 13:55:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:49.399 13:55:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:49.399 13:55:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:49.399 13:55:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:49.399 13:55:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:49.399 13:55:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:49.659 00:24:49.659 13:55:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:24:49.659 13:55:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:24:49.659 13:55:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:49.918 13:55:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:49.918 13:55:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:49.918 13:55:49 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:49.918 13:55:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:49.918 13:55:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:49.918 13:55:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:24:49.918 { 00:24:49.918 "cntlid": 73, 00:24:49.918 "qid": 0, 00:24:49.918 "state": "enabled", 00:24:49.918 "thread": "nvmf_tgt_poll_group_000", 00:24:49.918 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:24:49.918 "listen_address": { 00:24:49.918 "trtype": "RDMA", 00:24:49.918 "adrfam": "IPv4", 00:24:49.918 "traddr": "192.168.100.8", 00:24:49.918 "trsvcid": "4420" 00:24:49.918 }, 00:24:49.918 "peer_address": { 00:24:49.918 "trtype": "RDMA", 00:24:49.918 "adrfam": "IPv4", 00:24:49.918 "traddr": "192.168.100.8", 00:24:49.918 "trsvcid": "54323" 00:24:49.918 }, 00:24:49.918 "auth": { 00:24:49.918 "state": "completed", 00:24:49.918 "digest": "sha384", 00:24:49.918 "dhgroup": "ffdhe4096" 00:24:49.918 } 00:24:49.918 } 00:24:49.918 ]' 00:24:49.918 13:55:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:24:49.918 13:55:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:24:49.918 13:55:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:24:49.918 13:55:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:24:49.918 13:55:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:24:50.176 13:55:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:50.177 13:55:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:50.177 13:55:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:50.177 13:55:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODAzZDVlMDFlMzFkNmU3OWY5NDhkNjQ1Zjk4YmRiZTE0NTdiMzllOGYwYzkxOGQwj682eA==: --dhchap-ctrl-secret DHHC-1:03:M2RhOGMzZGFlYjJmMDhlNWZlZTc2Njc1YWJiM2NmMWE3ZmQ0NDBjNTkxMGIxYmJiYTVkYTM0ZmNjNWUxYTI4NCRCfwA=: 00:24:50.177 13:55:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ODAzZDVlMDFlMzFkNmU3OWY5NDhkNjQ1Zjk4YmRiZTE0NTdiMzllOGYwYzkxOGQwj682eA==: --dhchap-ctrl-secret DHHC-1:03:M2RhOGMzZGFlYjJmMDhlNWZlZTc2Njc1YWJiM2NmMWE3ZmQ0NDBjNTkxMGIxYmJiYTVkYTM0ZmNjNWUxYTI4NCRCfwA=: 00:24:50.742 13:55:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:51.000 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:51.000 13:55:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:24:51.000 13:55:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.000 13:55:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:51.000 13:55:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.000 13:55:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:24:51.000 13:55:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:51.000 13:55:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:51.258 13:55:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:24:51.258 13:55:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:24:51.258 13:55:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:24:51.258 13:55:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:24:51.258 13:55:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:24:51.258 13:55:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:51.258 13:55:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:51.258 13:55:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.258 13:55:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:51.258 13:55:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.258 13:55:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:51.258 13:55:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:51.258 13:55:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:51.517 00:24:51.517 13:55:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:24:51.517 13:55:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:24:51.517 13:55:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:51.517 13:55:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:51.517 13:55:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:51.517 13:55:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.517 13:55:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:51.517 13:55:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.517 13:55:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:24:51.517 { 00:24:51.517 "cntlid": 75, 00:24:51.517 "qid": 0, 00:24:51.517 "state": "enabled", 00:24:51.517 "thread": "nvmf_tgt_poll_group_000", 00:24:51.517 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:24:51.517 "listen_address": { 00:24:51.517 "trtype": "RDMA", 00:24:51.517 "adrfam": "IPv4", 00:24:51.517 "traddr": "192.168.100.8", 00:24:51.517 "trsvcid": "4420" 00:24:51.517 }, 00:24:51.517 "peer_address": { 00:24:51.517 "trtype": "RDMA", 00:24:51.517 "adrfam": "IPv4", 00:24:51.517 "traddr": "192.168.100.8", 00:24:51.517 "trsvcid": "51990" 00:24:51.517 }, 00:24:51.517 "auth": { 00:24:51.517 "state": "completed", 00:24:51.517 "digest": "sha384", 00:24:51.517 "dhgroup": "ffdhe4096" 00:24:51.517 } 00:24:51.517 } 00:24:51.517 ]' 00:24:51.517 13:55:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:24:51.776 13:55:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:24:51.776 13:55:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:24:51.776 13:55:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:24:51.776 13:55:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:24:51.776 13:55:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:51.776 13:55:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:51.776 13:55:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:52.036 13:55:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTdjNTRmZDg5N2Q5M2IwNjk0NDA5NTkzNjg0YWZmNDU20hyw: --dhchap-ctrl-secret DHHC-1:02:YWMyZDczYzA3MWM5ZDI5ZjZlZjVjYjYxZTgxMTkxYTMxZWJlNTgxODExYTE1M2YwB17WsA==: 00:24:52.036 13:55:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OTdjNTRmZDg5N2Q5M2IwNjk0NDA5NTkzNjg0YWZmNDU20hyw: --dhchap-ctrl-secret DHHC-1:02:YWMyZDczYzA3MWM5ZDI5ZjZlZjVjYjYxZTgxMTkxYTMxZWJlNTgxODExYTE1M2YwB17WsA==: 00:24:52.601 13:55:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme 
disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:52.601 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:52.601 13:55:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:24:52.601 13:55:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:52.601 13:55:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:52.601 13:55:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:52.601 13:55:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:24:52.601 13:55:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:52.601 13:55:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:52.859 13:55:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:24:52.859 13:55:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:24:52.859 13:55:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:24:52.859 13:55:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:24:52.859 13:55:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:24:52.859 13:55:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:52.859 13:55:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:52.859 13:55:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:52.859 13:55:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:52.859 13:55:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:52.859 13:55:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:52.859 13:55:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:52.859 13:55:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:53.117 00:24:53.117 13:55:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 
-- # hostrpc bdev_nvme_get_controllers 00:24:53.117 13:55:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:24:53.117 13:55:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:53.375 13:55:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:53.375 13:55:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:53.375 13:55:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:53.375 13:55:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:53.375 13:55:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:53.375 13:55:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:24:53.375 { 00:24:53.375 "cntlid": 77, 00:24:53.375 "qid": 0, 00:24:53.375 "state": "enabled", 00:24:53.375 "thread": "nvmf_tgt_poll_group_000", 00:24:53.375 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:24:53.375 "listen_address": { 00:24:53.375 "trtype": "RDMA", 00:24:53.375 "adrfam": "IPv4", 00:24:53.375 "traddr": "192.168.100.8", 00:24:53.375 "trsvcid": "4420" 00:24:53.375 }, 00:24:53.375 "peer_address": { 00:24:53.375 "trtype": "RDMA", 00:24:53.375 "adrfam": "IPv4", 00:24:53.375 "traddr": "192.168.100.8", 00:24:53.375 "trsvcid": "50664" 00:24:53.375 }, 00:24:53.375 "auth": { 00:24:53.375 "state": "completed", 00:24:53.375 "digest": "sha384", 00:24:53.375 "dhgroup": "ffdhe4096" 00:24:53.375 } 00:24:53.375 } 00:24:53.375 ]' 00:24:53.375 13:55:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:24:53.375 13:55:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:24:53.375 13:55:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:24:53.375 13:55:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:24:53.375 13:55:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:24:53.375 13:55:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:53.375 13:55:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:53.375 13:55:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:53.634 13:55:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzlkODZiZTFjMDUxY2RhNGEwZmVmZWJmYzEwMDJlMzFiZGUxMDE0Njk4MDk5MDRjEe2NXQ==: --dhchap-ctrl-secret DHHC-1:01:OTgxZWY0M2NiMWNlZmUwNjg3MGY5ODQ3YmE4MGJlZGH+9nhS: 00:24:53.634 13:55:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret 
DHHC-1:02:YzlkODZiZTFjMDUxY2RhNGEwZmVmZWJmYzEwMDJlMzFiZGUxMDE0Njk4MDk5MDRjEe2NXQ==: --dhchap-ctrl-secret DHHC-1:01:OTgxZWY0M2NiMWNlZmUwNjg3MGY5ODQ3YmE4MGJlZGH+9nhS: 00:24:54.202 13:55:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:54.202 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:54.202 13:55:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:24:54.462 13:55:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.462 13:55:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:54.462 13:55:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.462 13:55:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:24:54.462 13:55:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:54.462 13:55:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:54.462 13:55:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:24:54.462 13:55:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:24:54.462 13:55:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:24:54.462 13:55:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:24:54.462 13:55:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:24:54.462 13:55:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:54.462 13:55:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key3 00:24:54.462 13:55:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.462 13:55:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:54.462 13:55:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.462 13:55:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:24:54.462 13:55:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:24:54.462 13:55:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:24:54.721 00:24:54.721 13:55:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:24:54.721 13:55:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:24:54.721 13:55:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:54.980 13:55:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:54.980 13:55:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:54.980 13:55:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.980 13:55:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:54.980 13:55:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.980 13:55:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:24:54.980 { 00:24:54.980 "cntlid": 79, 00:24:54.980 "qid": 0, 00:24:54.980 "state": "enabled", 00:24:54.980 "thread": "nvmf_tgt_poll_group_000", 00:24:54.980 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:24:54.980 "listen_address": { 00:24:54.980 "trtype": "RDMA", 00:24:54.980 "adrfam": "IPv4", 00:24:54.980 "traddr": "192.168.100.8", 00:24:54.980 "trsvcid": "4420" 00:24:54.980 }, 00:24:54.980 "peer_address": { 00:24:54.980 "trtype": "RDMA", 00:24:54.980 "adrfam": "IPv4", 00:24:54.980 "traddr": "192.168.100.8", 00:24:54.980 "trsvcid": "38330" 00:24:54.980 }, 00:24:54.980 "auth": { 00:24:54.980 "state": "completed", 00:24:54.980 "digest": "sha384", 00:24:54.980 "dhgroup": "ffdhe4096" 00:24:54.980 } 00:24:54.980 } 00:24:54.980 ]' 00:24:54.980 13:55:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:24:54.980 13:55:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:24:54.980 13:55:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:24:54.980 13:55:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:24:54.980 13:55:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:24:55.239 13:55:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:55.239 13:55:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:55.239 13:55:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:55.239 13:55:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MDU2MGQ0Zjk3MGRlOGZmZjNiNTFmZTAxZTk3M2E2OWMxMGQwODU5NDIwODhlMjNjYTA1ZWJiMjMxYThlY2U2YWHslm0=: 00:24:55.239 13:55:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 
-i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MDU2MGQ0Zjk3MGRlOGZmZjNiNTFmZTAxZTk3M2E2OWMxMGQwODU5NDIwODhlMjNjYTA1ZWJiMjMxYThlY2U2YWHslm0=: 00:24:55.805 13:55:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:56.063 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:56.063 13:55:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:24:56.063 13:55:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.063 13:55:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:56.063 13:55:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.063 13:55:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:24:56.063 13:55:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:24:56.063 13:55:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:56.063 13:55:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:56.323 13:55:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:24:56.323 13:55:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:24:56.323 13:55:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:24:56.323 13:55:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:24:56.323 13:55:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:24:56.323 13:55:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:56.323 13:55:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:56.323 13:55:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.323 13:55:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:56.323 13:55:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.323 13:55:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:56.323 13:55:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:56.323 13:55:55 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:56.582 00:24:56.582 13:55:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:24:56.582 13:55:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:24:56.582 13:55:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:56.841 13:55:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:56.841 13:55:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:56.841 13:55:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.841 13:55:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:56.841 13:55:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.841 13:55:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:24:56.841 { 00:24:56.841 "cntlid": 81, 00:24:56.841 "qid": 0, 00:24:56.841 "state": "enabled", 00:24:56.841 "thread": "nvmf_tgt_poll_group_000", 00:24:56.841 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:24:56.841 "listen_address": { 00:24:56.841 "trtype": "RDMA", 00:24:56.841 "adrfam": "IPv4", 00:24:56.841 "traddr": "192.168.100.8", 00:24:56.841 "trsvcid": "4420" 00:24:56.841 }, 00:24:56.841 "peer_address": { 00:24:56.841 "trtype": "RDMA", 00:24:56.841 "adrfam": "IPv4", 00:24:56.841 "traddr": "192.168.100.8", 00:24:56.841 "trsvcid": "50381" 00:24:56.841 }, 00:24:56.841 "auth": { 00:24:56.841 "state": "completed", 00:24:56.841 "digest": "sha384", 00:24:56.841 "dhgroup": "ffdhe6144" 00:24:56.841 } 00:24:56.841 } 00:24:56.841 ]' 00:24:56.841 13:55:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:24:56.841 13:55:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:24:56.841 13:55:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:24:56.841 13:55:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:24:56.841 13:55:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:24:56.841 13:55:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:56.841 13:55:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:56.841 13:55:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:57.099 13:55:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:ODAzZDVlMDFlMzFkNmU3OWY5NDhkNjQ1Zjk4YmRiZTE0NTdiMzllOGYwYzkxOGQwj682eA==: --dhchap-ctrl-secret DHHC-1:03:M2RhOGMzZGFlYjJmMDhlNWZlZTc2Njc1YWJiM2NmMWE3ZmQ0NDBjNTkxMGIxYmJiYTVkYTM0ZmNjNWUxYTI4NCRCfwA=: 00:24:57.099 13:55:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ODAzZDVlMDFlMzFkNmU3OWY5NDhkNjQ1Zjk4YmRiZTE0NTdiMzllOGYwYzkxOGQwj682eA==: --dhchap-ctrl-secret DHHC-1:03:M2RhOGMzZGFlYjJmMDhlNWZlZTc2Njc1YWJiM2NmMWE3ZmQ0NDBjNTkxMGIxYmJiYTVkYTM0ZmNjNWUxYTI4NCRCfwA=: 00:24:57.668 13:55:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:57.668 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:57.668 13:55:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:24:57.668 13:55:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.668 13:55:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:57.669 13:55:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.669 13:55:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:24:57.669 13:55:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:57.669 13:55:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:57.928 13:55:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:24:57.928 13:55:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:24:57.928 13:55:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:24:57.928 13:55:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:24:57.928 13:55:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:24:57.928 13:55:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:57.928 13:55:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:57.928 13:55:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.928 13:55:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:57.928 13:55:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.928 13:55:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 
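For orientation, every pass of this trace drives the same DH-HMAC-CHAP sequence through SPDK's rpc.py; only the digest, dhgroup, and key index change between passes. A minimal sketch of one pass, reusing the socket path, NQNs, address, and flags exactly as they appear in the trace (key1/ckey1 shown; in the trace the target-side calls go through rpc_cmd, so invoking plain rpc.py with its default socket below is an assumption):

RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562

# Host side: restrict the initiator to a single digest/dhgroup combination.
$RPC -s /var/tmp/host.sock bdev_nvme_set_options \
  --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144

# Target side: register the host NQN together with its DH-HMAC-CHAP key pair.
$RPC nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN" \
  --dhchap-key key1 --dhchap-ctrlr-key ckey1

# Host side: attach a controller; this forces the authentication handshake.
$RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 \
  -a 192.168.100.8 -s 4420 -q "$HOSTNQN" -n nqn.2024-03.io.spdk:cnode0 \
  -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1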
00:24:57.928 13:55:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:57.928 13:55:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:58.187 00:24:58.187 13:55:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:24:58.187 13:55:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:24:58.187 13:55:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:58.446 13:55:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:58.446 13:55:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:58.446 13:55:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.446 13:55:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:58.446 13:55:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.446 13:55:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:24:58.446 { 00:24:58.446 "cntlid": 83, 00:24:58.446 "qid": 0, 00:24:58.446 "state": "enabled", 00:24:58.446 "thread": "nvmf_tgt_poll_group_000", 00:24:58.446 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:24:58.446 "listen_address": { 00:24:58.446 "trtype": "RDMA", 00:24:58.446 "adrfam": "IPv4", 00:24:58.446 "traddr": "192.168.100.8", 00:24:58.446 "trsvcid": "4420" 00:24:58.446 }, 00:24:58.446 "peer_address": { 00:24:58.446 "trtype": "RDMA", 00:24:58.446 "adrfam": "IPv4", 00:24:58.446 "traddr": "192.168.100.8", 00:24:58.446 "trsvcid": "35454" 00:24:58.446 }, 00:24:58.446 "auth": { 00:24:58.446 "state": "completed", 00:24:58.446 "digest": "sha384", 00:24:58.446 "dhgroup": "ffdhe6144" 00:24:58.446 } 00:24:58.446 } 00:24:58.446 ]' 00:24:58.446 13:55:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:24:58.446 13:55:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:24:58.446 13:55:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:24:58.446 13:55:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:24:58.446 13:55:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:24:58.704 13:55:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:58.704 13:55:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 
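Each attach is then verified and torn down before the next combination; the checks below mirror the jq assertions at target/auth.sh@73-78 in the trace (same $RPC, socket, and NQN as the sketch above):

# Confirm the controller exists on the host side.
$RPC -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'  # expect: nvme0

# Confirm the target-side qpair completed authentication with the expected parameters.
qpairs=$($RPC nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
echo "$qpairs" | jq -r '.[0].auth.digest'   # expect: sha384
echo "$qpairs" | jq -r '.[0].auth.dhgroup'  # expect: ffdhe6144 (varies per pass)
echo "$qpairs" | jq -r '.[0].auth.state'    # expect: completed

# Detach so the next digest/dhgroup/key pass starts clean.
$RPC -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

The trace additionally repeats each handshake with the kernel initiator (nvme connect -t rdma ... --dhchap-secret DHHC-1:... --dhchap-ctrl-secret DHHC-1:...) and then removes the host registration with nvmf_subsystem_remove_host before moving on.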
00:24:58.704 13:55:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:58.704 13:55:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTdjNTRmZDg5N2Q5M2IwNjk0NDA5NTkzNjg0YWZmNDU20hyw: --dhchap-ctrl-secret DHHC-1:02:YWMyZDczYzA3MWM5ZDI5ZjZlZjVjYjYxZTgxMTkxYTMxZWJlNTgxODExYTE1M2YwB17WsA==: 00:24:58.704 13:55:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OTdjNTRmZDg5N2Q5M2IwNjk0NDA5NTkzNjg0YWZmNDU20hyw: --dhchap-ctrl-secret DHHC-1:02:YWMyZDczYzA3MWM5ZDI5ZjZlZjVjYjYxZTgxMTkxYTMxZWJlNTgxODExYTE1M2YwB17WsA==: 00:24:59.642 13:55:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:59.642 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:59.642 13:55:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:24:59.642 13:55:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:59.642 13:55:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:59.642 13:55:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:59.642 13:55:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:24:59.642 13:55:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:59.642 13:55:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:59.642 13:55:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:24:59.642 13:55:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:24:59.642 13:55:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:24:59.642 13:55:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:24:59.642 13:55:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:24:59.642 13:55:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:59.642 13:55:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:59.642 13:55:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:59.642 13:55:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:59.642 13:55:59 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:59.642 13:55:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:59.642 13:55:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:59.642 13:55:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:59.900 00:25:00.160 13:55:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:25:00.160 13:55:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:25:00.160 13:55:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:00.160 13:55:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:00.160 13:55:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:00.160 13:55:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:00.160 13:55:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:00.160 13:55:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:00.160 13:55:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:25:00.160 { 00:25:00.160 "cntlid": 85, 00:25:00.160 "qid": 0, 00:25:00.160 "state": "enabled", 00:25:00.160 "thread": "nvmf_tgt_poll_group_000", 00:25:00.160 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:25:00.160 "listen_address": { 00:25:00.160 "trtype": "RDMA", 00:25:00.160 "adrfam": "IPv4", 00:25:00.160 "traddr": "192.168.100.8", 00:25:00.160 "trsvcid": "4420" 00:25:00.160 }, 00:25:00.160 "peer_address": { 00:25:00.160 "trtype": "RDMA", 00:25:00.160 "adrfam": "IPv4", 00:25:00.160 "traddr": "192.168.100.8", 00:25:00.160 "trsvcid": "42759" 00:25:00.160 }, 00:25:00.160 "auth": { 00:25:00.160 "state": "completed", 00:25:00.160 "digest": "sha384", 00:25:00.160 "dhgroup": "ffdhe6144" 00:25:00.160 } 00:25:00.160 } 00:25:00.160 ]' 00:25:00.160 13:55:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:25:00.160 13:55:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:25:00.160 13:55:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:25:00.419 13:56:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:25:00.419 13:56:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:25:00.419 
13:56:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:00.419 13:56:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:00.419 13:56:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:00.419 13:56:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzlkODZiZTFjMDUxY2RhNGEwZmVmZWJmYzEwMDJlMzFiZGUxMDE0Njk4MDk5MDRjEe2NXQ==: --dhchap-ctrl-secret DHHC-1:01:OTgxZWY0M2NiMWNlZmUwNjg3MGY5ODQ3YmE4MGJlZGH+9nhS: 00:25:00.419 13:56:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YzlkODZiZTFjMDUxY2RhNGEwZmVmZWJmYzEwMDJlMzFiZGUxMDE0Njk4MDk5MDRjEe2NXQ==: --dhchap-ctrl-secret DHHC-1:01:OTgxZWY0M2NiMWNlZmUwNjg3MGY5ODQ3YmE4MGJlZGH+9nhS: 00:25:01.353 13:56:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:01.353 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:01.353 13:56:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:25:01.353 13:56:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:01.353 13:56:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:01.353 13:56:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:01.353 13:56:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:25:01.353 13:56:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:01.353 13:56:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:01.353 13:56:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:25:01.353 13:56:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:25:01.353 13:56:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:25:01.353 13:56:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:25:01.353 13:56:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:25:01.353 13:56:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:25:01.353 13:56:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key3 00:25:01.353 13:56:01 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:01.353 13:56:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:01.353 13:56:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:01.353 13:56:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:25:01.353 13:56:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:25:01.353 13:56:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:25:01.919 00:25:01.919 13:56:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:25:01.919 13:56:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:25:01.919 13:56:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:01.919 13:56:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:01.919 13:56:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:01.919 13:56:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:01.919 13:56:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:01.919 13:56:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:01.919 13:56:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:25:01.919 { 00:25:01.919 "cntlid": 87, 00:25:01.919 "qid": 0, 00:25:01.919 "state": "enabled", 00:25:01.919 "thread": "nvmf_tgt_poll_group_000", 00:25:01.919 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:25:01.919 "listen_address": { 00:25:01.919 "trtype": "RDMA", 00:25:01.919 "adrfam": "IPv4", 00:25:01.919 "traddr": "192.168.100.8", 00:25:01.919 "trsvcid": "4420" 00:25:01.919 }, 00:25:01.919 "peer_address": { 00:25:01.919 "trtype": "RDMA", 00:25:01.919 "adrfam": "IPv4", 00:25:01.919 "traddr": "192.168.100.8", 00:25:01.919 "trsvcid": "41622" 00:25:01.919 }, 00:25:01.919 "auth": { 00:25:01.919 "state": "completed", 00:25:01.919 "digest": "sha384", 00:25:01.919 "dhgroup": "ffdhe6144" 00:25:01.919 } 00:25:01.919 } 00:25:01.919 ]' 00:25:01.919 13:56:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:25:01.919 13:56:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:25:01.919 13:56:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:25:02.178 13:56:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 
== \f\f\d\h\e\6\1\4\4 ]] 00:25:02.178 13:56:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:25:02.178 13:56:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:02.178 13:56:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:02.178 13:56:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:02.178 13:56:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MDU2MGQ0Zjk3MGRlOGZmZjNiNTFmZTAxZTk3M2E2OWMxMGQwODU5NDIwODhlMjNjYTA1ZWJiMjMxYThlY2U2YWHslm0=: 00:25:02.178 13:56:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MDU2MGQ0Zjk3MGRlOGZmZjNiNTFmZTAxZTk3M2E2OWMxMGQwODU5NDIwODhlMjNjYTA1ZWJiMjMxYThlY2U2YWHslm0=: 00:25:02.746 13:56:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:03.005 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:03.005 13:56:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:25:03.005 13:56:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:03.005 13:56:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:03.005 13:56:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:03.005 13:56:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:25:03.005 13:56:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:25:03.005 13:56:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:03.005 13:56:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:03.264 13:56:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:25:03.264 13:56:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:25:03.264 13:56:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:25:03.264 13:56:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:25:03.264 13:56:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:25:03.264 13:56:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:25:03.264 13:56:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:03.264 13:56:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:03.264 13:56:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:03.264 13:56:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:03.264 13:56:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:03.264 13:56:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:03.264 13:56:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:03.524 00:25:03.524 13:56:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:25:03.524 13:56:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:25:03.524 13:56:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:03.783 13:56:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:03.783 13:56:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:03.783 13:56:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:03.783 13:56:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:03.783 13:56:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:03.783 13:56:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:25:03.783 { 00:25:03.783 "cntlid": 89, 00:25:03.783 "qid": 0, 00:25:03.783 "state": "enabled", 00:25:03.783 "thread": "nvmf_tgt_poll_group_000", 00:25:03.783 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:25:03.783 "listen_address": { 00:25:03.783 "trtype": "RDMA", 00:25:03.783 "adrfam": "IPv4", 00:25:03.783 "traddr": "192.168.100.8", 00:25:03.783 "trsvcid": "4420" 00:25:03.783 }, 00:25:03.783 "peer_address": { 00:25:03.783 "trtype": "RDMA", 00:25:03.783 "adrfam": "IPv4", 00:25:03.783 "traddr": "192.168.100.8", 00:25:03.783 "trsvcid": "39281" 00:25:03.783 }, 00:25:03.783 "auth": { 00:25:03.783 "state": "completed", 00:25:03.783 "digest": "sha384", 00:25:03.783 "dhgroup": "ffdhe8192" 00:25:03.783 } 00:25:03.783 } 00:25:03.783 ]' 00:25:03.783 13:56:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:25:03.783 13:56:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:25:03.783 13:56:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:25:04.043 13:56:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:25:04.043 13:56:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:25:04.043 13:56:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:04.043 13:56:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:04.043 13:56:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:04.043 13:56:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODAzZDVlMDFlMzFkNmU3OWY5NDhkNjQ1Zjk4YmRiZTE0NTdiMzllOGYwYzkxOGQwj682eA==: --dhchap-ctrl-secret DHHC-1:03:M2RhOGMzZGFlYjJmMDhlNWZlZTc2Njc1YWJiM2NmMWE3ZmQ0NDBjNTkxMGIxYmJiYTVkYTM0ZmNjNWUxYTI4NCRCfwA=: 00:25:04.043 13:56:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ODAzZDVlMDFlMzFkNmU3OWY5NDhkNjQ1Zjk4YmRiZTE0NTdiMzllOGYwYzkxOGQwj682eA==: --dhchap-ctrl-secret DHHC-1:03:M2RhOGMzZGFlYjJmMDhlNWZlZTc2Njc1YWJiM2NmMWE3ZmQ0NDBjNTkxMGIxYmJiYTVkYTM0ZmNjNWUxYTI4NCRCfwA=: 00:25:04.980 13:56:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:04.980 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:04.980 13:56:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:25:04.980 13:56:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:04.980 13:56:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:04.980 13:56:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:04.980 13:56:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:25:04.980 13:56:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:04.980 13:56:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:04.980 13:56:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:25:04.980 13:56:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:25:04.980 13:56:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:25:04.980 13:56:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 
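The nvme_connect passes in this log exercise the same handshake through the kernel initiator. A hedged template of that invocation, with placeholder secrets rather than the real test keys: -i 1 caps the connection at one I/O queue, and -l 0 sets ctrl-loss-tmo to zero so the later disconnect tears down immediately.

nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 \
    -q "$HOSTNQN" --hostid "$HOSTID" -i 1 -l 0 \
    --dhchap-secret 'DHHC-1:00:<base64-secret>:' \
    --dhchap-ctrl-secret 'DHHC-1:03:<base64-secret>:'
nvme disconnect -n nqn.2024-03.io.spdk:cnode0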
00:25:04.980 13:56:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:25:04.980 13:56:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:25:04.981 13:56:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:04.981 13:56:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:04.981 13:56:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:04.981 13:56:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:04.981 13:56:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:04.981 13:56:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:04.981 13:56:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:05.548 00:25:05.548 13:56:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:25:05.548 13:56:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:25:05.548 13:56:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:05.548 13:56:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:05.548 13:56:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:05.548 13:56:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.548 13:56:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:05.807 13:56:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.807 13:56:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:25:05.807 { 00:25:05.807 "cntlid": 91, 00:25:05.807 "qid": 0, 00:25:05.807 "state": "enabled", 00:25:05.807 "thread": "nvmf_tgt_poll_group_000", 00:25:05.807 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:25:05.807 "listen_address": { 00:25:05.807 "trtype": "RDMA", 00:25:05.807 "adrfam": "IPv4", 00:25:05.807 "traddr": "192.168.100.8", 00:25:05.807 "trsvcid": "4420" 00:25:05.807 }, 00:25:05.807 "peer_address": { 00:25:05.807 "trtype": "RDMA", 00:25:05.807 "adrfam": "IPv4", 00:25:05.807 "traddr": "192.168.100.8", 00:25:05.807 "trsvcid": "43894" 00:25:05.807 }, 00:25:05.807 "auth": { 
00:25:05.807 "state": "completed", 00:25:05.807 "digest": "sha384", 00:25:05.807 "dhgroup": "ffdhe8192" 00:25:05.807 } 00:25:05.807 } 00:25:05.807 ]' 00:25:05.807 13:56:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:25:05.807 13:56:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:25:05.807 13:56:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:25:05.807 13:56:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:25:05.807 13:56:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:25:05.807 13:56:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:05.807 13:56:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:05.807 13:56:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:06.065 13:56:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTdjNTRmZDg5N2Q5M2IwNjk0NDA5NTkzNjg0YWZmNDU20hyw: --dhchap-ctrl-secret DHHC-1:02:YWMyZDczYzA3MWM5ZDI5ZjZlZjVjYjYxZTgxMTkxYTMxZWJlNTgxODExYTE1M2YwB17WsA==: 00:25:06.065 13:56:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OTdjNTRmZDg5N2Q5M2IwNjk0NDA5NTkzNjg0YWZmNDU20hyw: --dhchap-ctrl-secret DHHC-1:02:YWMyZDczYzA3MWM5ZDI5ZjZlZjVjYjYxZTgxMTkxYTMxZWJlNTgxODExYTE1M2YwB17WsA==: 00:25:06.633 13:56:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:06.633 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:06.633 13:56:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:25:06.633 13:56:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.633 13:56:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:06.633 13:56:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.633 13:56:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:25:06.633 13:56:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:06.633 13:56:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:06.890 13:56:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:25:06.890 13:56:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local 
digest dhgroup key ckey qpairs 00:25:06.890 13:56:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:25:06.890 13:56:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:25:06.890 13:56:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:25:06.890 13:56:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:25:06.890 13:56:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:06.890 13:56:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.890 13:56:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:06.890 13:56:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.890 13:56:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:06.890 13:56:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:06.891 13:56:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:07.456 00:25:07.456 13:56:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:25:07.456 13:56:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:25:07.456 13:56:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:07.456 13:56:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:07.456 13:56:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:07.456 13:56:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.456 13:56:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:07.456 13:56:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.456 13:56:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:25:07.456 { 00:25:07.456 "cntlid": 93, 00:25:07.456 "qid": 0, 00:25:07.456 "state": "enabled", 00:25:07.456 "thread": "nvmf_tgt_poll_group_000", 00:25:07.456 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:25:07.456 "listen_address": { 00:25:07.456 "trtype": "RDMA", 00:25:07.456 "adrfam": "IPv4", 00:25:07.456 "traddr": "192.168.100.8", 
00:25:07.456 "trsvcid": "4420" 00:25:07.456 }, 00:25:07.456 "peer_address": { 00:25:07.456 "trtype": "RDMA", 00:25:07.456 "adrfam": "IPv4", 00:25:07.456 "traddr": "192.168.100.8", 00:25:07.456 "trsvcid": "55097" 00:25:07.456 }, 00:25:07.456 "auth": { 00:25:07.456 "state": "completed", 00:25:07.456 "digest": "sha384", 00:25:07.456 "dhgroup": "ffdhe8192" 00:25:07.456 } 00:25:07.456 } 00:25:07.456 ]' 00:25:07.456 13:56:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:25:07.456 13:56:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:25:07.456 13:56:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:25:07.714 13:56:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:25:07.714 13:56:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:25:07.714 13:56:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:07.714 13:56:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:07.714 13:56:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:07.714 13:56:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzlkODZiZTFjMDUxY2RhNGEwZmVmZWJmYzEwMDJlMzFiZGUxMDE0Njk4MDk5MDRjEe2NXQ==: --dhchap-ctrl-secret DHHC-1:01:OTgxZWY0M2NiMWNlZmUwNjg3MGY5ODQ3YmE4MGJlZGH+9nhS: 00:25:07.714 13:56:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YzlkODZiZTFjMDUxY2RhNGEwZmVmZWJmYzEwMDJlMzFiZGUxMDE0Njk4MDk5MDRjEe2NXQ==: --dhchap-ctrl-secret DHHC-1:01:OTgxZWY0M2NiMWNlZmUwNjg3MGY5ODQ3YmE4MGJlZGH+9nhS: 00:25:08.308 13:56:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:08.566 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:08.566 13:56:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:25:08.566 13:56:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.566 13:56:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:08.566 13:56:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.566 13:56:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:25:08.566 13:56:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:08.566 13:56:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:25:08.822 13:56:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:25:08.822 13:56:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:25:08.822 13:56:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:25:08.822 13:56:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:25:08.822 13:56:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:25:08.822 13:56:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:25:08.822 13:56:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key3 00:25:08.822 13:56:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.822 13:56:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:08.822 13:56:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.822 13:56:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:25:08.822 13:56:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:25:08.822 13:56:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:25:09.079 00:25:09.337 13:56:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:25:09.337 13:56:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:25:09.338 13:56:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:09.338 13:56:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:09.338 13:56:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:09.338 13:56:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.338 13:56:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:09.338 13:56:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.338 13:56:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:25:09.338 { 00:25:09.338 "cntlid": 95, 00:25:09.338 "qid": 0, 00:25:09.338 "state": "enabled", 00:25:09.338 "thread": "nvmf_tgt_poll_group_000", 00:25:09.338 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:25:09.338 "listen_address": { 00:25:09.338 "trtype": "RDMA", 00:25:09.338 "adrfam": "IPv4", 00:25:09.338 "traddr": "192.168.100.8", 00:25:09.338 "trsvcid": "4420" 00:25:09.338 }, 00:25:09.338 "peer_address": { 00:25:09.338 "trtype": "RDMA", 00:25:09.338 "adrfam": "IPv4", 00:25:09.338 "traddr": "192.168.100.8", 00:25:09.338 "trsvcid": "53112" 00:25:09.338 }, 00:25:09.338 "auth": { 00:25:09.338 "state": "completed", 00:25:09.338 "digest": "sha384", 00:25:09.338 "dhgroup": "ffdhe8192" 00:25:09.338 } 00:25:09.338 } 00:25:09.338 ]' 00:25:09.338 13:56:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:25:09.338 13:56:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:25:09.338 13:56:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:25:09.596 13:56:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:25:09.596 13:56:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:25:09.596 13:56:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:09.596 13:56:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:09.596 13:56:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:09.854 13:56:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MDU2MGQ0Zjk3MGRlOGZmZjNiNTFmZTAxZTk3M2E2OWMxMGQwODU5NDIwODhlMjNjYTA1ZWJiMjMxYThlY2U2YWHslm0=: 00:25:09.854 13:56:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MDU2MGQ0Zjk3MGRlOGZmZjNiNTFmZTAxZTk3M2E2OWMxMGQwODU5NDIwODhlMjNjYTA1ZWJiMjMxYThlY2U2YWHslm0=: 00:25:10.420 13:56:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:10.420 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:10.420 13:56:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:25:10.420 13:56:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.420 13:56:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:10.420 13:56:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.420 13:56:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:25:10.420 13:56:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:25:10.420 13:56:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:25:10.420 13:56:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:25:10.420 13:56:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:25:10.679 13:56:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:25:10.679 13:56:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:25:10.679 13:56:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:25:10.679 13:56:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:25:10.679 13:56:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:25:10.679 13:56:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:25:10.679 13:56:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:10.679 13:56:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.679 13:56:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:10.679 13:56:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.679 13:56:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:10.679 13:56:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:10.679 13:56:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:10.938 00:25:10.938 13:56:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:25:10.938 13:56:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:10.938 13:56:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:25:10.938 13:56:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:10.938 13:56:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:10.938 13:56:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.938 13:56:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:11.197 13:56:10 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.198 13:56:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:25:11.198 { 00:25:11.198 "cntlid": 97, 00:25:11.198 "qid": 0, 00:25:11.198 "state": "enabled", 00:25:11.198 "thread": "nvmf_tgt_poll_group_000", 00:25:11.198 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:25:11.198 "listen_address": { 00:25:11.198 "trtype": "RDMA", 00:25:11.198 "adrfam": "IPv4", 00:25:11.198 "traddr": "192.168.100.8", 00:25:11.198 "trsvcid": "4420" 00:25:11.198 }, 00:25:11.198 "peer_address": { 00:25:11.198 "trtype": "RDMA", 00:25:11.198 "adrfam": "IPv4", 00:25:11.198 "traddr": "192.168.100.8", 00:25:11.198 "trsvcid": "56635" 00:25:11.198 }, 00:25:11.198 "auth": { 00:25:11.198 "state": "completed", 00:25:11.198 "digest": "sha512", 00:25:11.198 "dhgroup": "null" 00:25:11.198 } 00:25:11.198 } 00:25:11.198 ]' 00:25:11.198 13:56:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:25:11.198 13:56:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:25:11.198 13:56:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:25:11.198 13:56:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:25:11.198 13:56:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:25:11.198 13:56:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:11.198 13:56:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:11.198 13:56:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:11.457 13:56:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODAzZDVlMDFlMzFkNmU3OWY5NDhkNjQ1Zjk4YmRiZTE0NTdiMzllOGYwYzkxOGQwj682eA==: --dhchap-ctrl-secret DHHC-1:03:M2RhOGMzZGFlYjJmMDhlNWZlZTc2Njc1YWJiM2NmMWE3ZmQ0NDBjNTkxMGIxYmJiYTVkYTM0ZmNjNWUxYTI4NCRCfwA=: 00:25:11.457 13:56:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ODAzZDVlMDFlMzFkNmU3OWY5NDhkNjQ1Zjk4YmRiZTE0NTdiMzllOGYwYzkxOGQwj682eA==: --dhchap-ctrl-secret DHHC-1:03:M2RhOGMzZGFlYjJmMDhlNWZlZTc2Njc1YWJiM2NmMWE3ZmQ0NDBjNTkxMGIxYmJiYTVkYTM0ZmNjNWUxYTI4NCRCfwA=: 00:25:12.027 13:56:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:12.027 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:12.027 13:56:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:25:12.027 13:56:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.027 13:56:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
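The xtrace markers above (auth.sh@118-@121) show the sweep advancing to sha512 with the null dhgroup. The driving loops plausibly have this shape; a reconstruction from the markers, with the digest and dhgroup lists assumed rather than quoted from target/auth.sh:

# Sweep every digest x dhgroup x key combination through connect_authenticate
digests=(sha256 sha384 sha512)
dhgroups=(null ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)
for digest in "${digests[@]}"; do        # auth.sh@118
  for dhgroup in "${dhgroups[@]}"; do    # auth.sh@119
    for keyid in "${!keys[@]}"; do       # auth.sh@120
      hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"  # @121
      connect_authenticate "$digest" "$dhgroup" "$keyid"                                     # @123
    done
  done
done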
00:25:12.027 13:56:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.027 13:56:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:25:12.027 13:56:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:25:12.027 13:56:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:25:12.287 13:56:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:25:12.287 13:56:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:25:12.287 13:56:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:25:12.287 13:56:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:25:12.287 13:56:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:25:12.287 13:56:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:25:12.287 13:56:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:12.287 13:56:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.287 13:56:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:12.287 13:56:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.287 13:56:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:12.287 13:56:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:12.287 13:56:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:12.546 00:25:12.546 13:56:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:25:12.546 13:56:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:12.546 13:56:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:25:12.805 13:56:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:12.805 13:56:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:12.805 13:56:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.805 13:56:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:12.805 13:56:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.805 13:56:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:25:12.805 { 00:25:12.805 "cntlid": 99, 00:25:12.805 "qid": 0, 00:25:12.805 "state": "enabled", 00:25:12.805 "thread": "nvmf_tgt_poll_group_000", 00:25:12.805 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:25:12.805 "listen_address": { 00:25:12.805 "trtype": "RDMA", 00:25:12.805 "adrfam": "IPv4", 00:25:12.805 "traddr": "192.168.100.8", 00:25:12.805 "trsvcid": "4420" 00:25:12.805 }, 00:25:12.805 "peer_address": { 00:25:12.805 "trtype": "RDMA", 00:25:12.805 "adrfam": "IPv4", 00:25:12.805 "traddr": "192.168.100.8", 00:25:12.805 "trsvcid": "44944" 00:25:12.805 }, 00:25:12.805 "auth": { 00:25:12.805 "state": "completed", 00:25:12.805 "digest": "sha512", 00:25:12.805 "dhgroup": "null" 00:25:12.805 } 00:25:12.805 } 00:25:12.805 ]' 00:25:12.805 13:56:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:25:12.805 13:56:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:25:12.805 13:56:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:25:12.805 13:56:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:25:12.805 13:56:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:25:12.805 13:56:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:12.805 13:56:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:12.805 13:56:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:13.065 13:56:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTdjNTRmZDg5N2Q5M2IwNjk0NDA5NTkzNjg0YWZmNDU20hyw: --dhchap-ctrl-secret DHHC-1:02:YWMyZDczYzA3MWM5ZDI5ZjZlZjVjYjYxZTgxMTkxYTMxZWJlNTgxODExYTE1M2YwB17WsA==: 00:25:13.065 13:56:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OTdjNTRmZDg5N2Q5M2IwNjk0NDA5NTkzNjg0YWZmNDU20hyw: --dhchap-ctrl-secret DHHC-1:02:YWMyZDczYzA3MWM5ZDI5ZjZlZjVjYjYxZTgxMTkxYTMxZWJlNTgxODExYTE1M2YwB17WsA==: 00:25:13.633 13:56:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:13.633 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:13.633 13:56:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:25:13.633 
13:56:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.633 13:56:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:13.633 13:56:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.633 13:56:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:25:13.633 13:56:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:25:13.633 13:56:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:25:13.893 13:56:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:25:13.893 13:56:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:25:13.893 13:56:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:25:13.893 13:56:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:25:13.893 13:56:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:25:13.893 13:56:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:25:13.893 13:56:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:13.893 13:56:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.893 13:56:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:13.893 13:56:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.893 13:56:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:13.893 13:56:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:13.893 13:56:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:14.153 00:25:14.153 13:56:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:25:14.153 13:56:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:14.153 13:56:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:25:14.412 
13:56:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:14.412 13:56:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:14.412 13:56:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.412 13:56:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:14.412 13:56:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.412 13:56:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:25:14.412 { 00:25:14.412 "cntlid": 101, 00:25:14.412 "qid": 0, 00:25:14.412 "state": "enabled", 00:25:14.412 "thread": "nvmf_tgt_poll_group_000", 00:25:14.413 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:25:14.413 "listen_address": { 00:25:14.413 "trtype": "RDMA", 00:25:14.413 "adrfam": "IPv4", 00:25:14.413 "traddr": "192.168.100.8", 00:25:14.413 "trsvcid": "4420" 00:25:14.413 }, 00:25:14.413 "peer_address": { 00:25:14.413 "trtype": "RDMA", 00:25:14.413 "adrfam": "IPv4", 00:25:14.413 "traddr": "192.168.100.8", 00:25:14.413 "trsvcid": "56339" 00:25:14.413 }, 00:25:14.413 "auth": { 00:25:14.413 "state": "completed", 00:25:14.413 "digest": "sha512", 00:25:14.413 "dhgroup": "null" 00:25:14.413 } 00:25:14.413 } 00:25:14.413 ]' 00:25:14.413 13:56:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:25:14.413 13:56:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:25:14.413 13:56:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:25:14.413 13:56:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:25:14.413 13:56:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:25:14.413 13:56:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:14.413 13:56:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:14.413 13:56:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:14.672 13:56:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzlkODZiZTFjMDUxY2RhNGEwZmVmZWJmYzEwMDJlMzFiZGUxMDE0Njk4MDk5MDRjEe2NXQ==: --dhchap-ctrl-secret DHHC-1:01:OTgxZWY0M2NiMWNlZmUwNjg3MGY5ODQ3YmE4MGJlZGH+9nhS: 00:25:14.672 13:56:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YzlkODZiZTFjMDUxY2RhNGEwZmVmZWJmYzEwMDJlMzFiZGUxMDE0Njk4MDk5MDRjEe2NXQ==: --dhchap-ctrl-secret DHHC-1:01:OTgxZWY0M2NiMWNlZmUwNjg3MGY5ODQ3YmE4MGJlZGH+9nhS: 00:25:15.364 13:56:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:15.364 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:15.364 13:56:15 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:25:15.364 13:56:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.364 13:56:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:15.364 13:56:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.364 13:56:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:25:15.364 13:56:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:25:15.364 13:56:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:25:15.622 13:56:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:25:15.622 13:56:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:25:15.622 13:56:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:25:15.622 13:56:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:25:15.622 13:56:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:25:15.622 13:56:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:25:15.622 13:56:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key3 00:25:15.622 13:56:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.622 13:56:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:15.622 13:56:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.622 13:56:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:25:15.622 13:56:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:25:15.622 13:56:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:25:15.879 00:25:15.879 13:56:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:25:15.879 13:56:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:25:15.879 13:56:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:16.138 13:56:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:16.138 13:56:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:16.138 13:56:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.138 13:56:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:16.138 13:56:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.138 13:56:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:25:16.138 { 00:25:16.138 "cntlid": 103, 00:25:16.138 "qid": 0, 00:25:16.138 "state": "enabled", 00:25:16.138 "thread": "nvmf_tgt_poll_group_000", 00:25:16.138 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:25:16.138 "listen_address": { 00:25:16.138 "trtype": "RDMA", 00:25:16.138 "adrfam": "IPv4", 00:25:16.138 "traddr": "192.168.100.8", 00:25:16.138 "trsvcid": "4420" 00:25:16.138 }, 00:25:16.138 "peer_address": { 00:25:16.138 "trtype": "RDMA", 00:25:16.138 "adrfam": "IPv4", 00:25:16.138 "traddr": "192.168.100.8", 00:25:16.138 "trsvcid": "34567" 00:25:16.138 }, 00:25:16.138 "auth": { 00:25:16.138 "state": "completed", 00:25:16.138 "digest": "sha512", 00:25:16.138 "dhgroup": "null" 00:25:16.138 } 00:25:16.138 } 00:25:16.138 ]' 00:25:16.138 13:56:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:25:16.138 13:56:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:25:16.138 13:56:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:25:16.138 13:56:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:25:16.138 13:56:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:25:16.138 13:56:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:16.138 13:56:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:16.138 13:56:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:16.397 13:56:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MDU2MGQ0Zjk3MGRlOGZmZjNiNTFmZTAxZTk3M2E2OWMxMGQwODU5NDIwODhlMjNjYTA1ZWJiMjMxYThlY2U2YWHslm0=: 00:25:16.397 13:56:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MDU2MGQ0Zjk3MGRlOGZmZjNiNTFmZTAxZTk3M2E2OWMxMGQwODU5NDIwODhlMjNjYTA1ZWJiMjMxYThlY2U2YWHslm0=: 00:25:16.962 13:56:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:16.962 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:16.962 13:56:16 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:25:16.962 13:56:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.962 13:56:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:16.962 13:56:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.962 13:56:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:25:16.962 13:56:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:25:16.962 13:56:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:16.962 13:56:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:17.220 13:56:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:25:17.220 13:56:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:25:17.220 13:56:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:25:17.220 13:56:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:25:17.220 13:56:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:25:17.220 13:56:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:25:17.220 13:56:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:17.220 13:56:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.220 13:56:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:17.220 13:56:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.220 13:56:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:17.220 13:56:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:17.220 13:56:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:17.479 00:25:17.479 13:56:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
hostrpc bdev_nvme_get_controllers 00:25:17.479 13:56:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:17.479 13:56:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:25:17.738 13:56:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:17.738 13:56:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:17.738 13:56:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.738 13:56:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:17.738 13:56:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.738 13:56:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:25:17.738 { 00:25:17.738 "cntlid": 105, 00:25:17.738 "qid": 0, 00:25:17.738 "state": "enabled", 00:25:17.738 "thread": "nvmf_tgt_poll_group_000", 00:25:17.738 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:25:17.738 "listen_address": { 00:25:17.738 "trtype": "RDMA", 00:25:17.738 "adrfam": "IPv4", 00:25:17.738 "traddr": "192.168.100.8", 00:25:17.738 "trsvcid": "4420" 00:25:17.738 }, 00:25:17.738 "peer_address": { 00:25:17.738 "trtype": "RDMA", 00:25:17.738 "adrfam": "IPv4", 00:25:17.738 "traddr": "192.168.100.8", 00:25:17.738 "trsvcid": "48458" 00:25:17.738 }, 00:25:17.738 "auth": { 00:25:17.738 "state": "completed", 00:25:17.738 "digest": "sha512", 00:25:17.738 "dhgroup": "ffdhe2048" 00:25:17.738 } 00:25:17.738 } 00:25:17.738 ]' 00:25:17.738 13:56:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:25:17.738 13:56:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:25:17.738 13:56:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:25:17.738 13:56:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:25:17.738 13:56:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:25:17.738 13:56:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:17.739 13:56:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:17.739 13:56:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:17.997 13:56:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODAzZDVlMDFlMzFkNmU3OWY5NDhkNjQ1Zjk4YmRiZTE0NTdiMzllOGYwYzkxOGQwj682eA==: --dhchap-ctrl-secret DHHC-1:03:M2RhOGMzZGFlYjJmMDhlNWZlZTc2Njc1YWJiM2NmMWE3ZmQ0NDBjNTkxMGIxYmJiYTVkYTM0ZmNjNWUxYTI4NCRCfwA=: 00:25:17.997 13:56:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 
--dhchap-secret DHHC-1:00:ODAzZDVlMDFlMzFkNmU3OWY5NDhkNjQ1Zjk4YmRiZTE0NTdiMzllOGYwYzkxOGQwj682eA==: --dhchap-ctrl-secret DHHC-1:03:M2RhOGMzZGFlYjJmMDhlNWZlZTc2Njc1YWJiM2NmMWE3ZmQ0NDBjNTkxMGIxYmJiYTVkYTM0ZmNjNWUxYTI4NCRCfwA=: 00:25:18.567 13:56:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:18.567 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:18.567 13:56:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:25:18.567 13:56:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.567 13:56:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:18.567 13:56:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.567 13:56:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:25:18.567 13:56:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:18.567 13:56:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:18.826 13:56:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:25:18.826 13:56:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:25:18.826 13:56:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:25:18.826 13:56:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:25:18.826 13:56:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:25:18.826 13:56:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:25:18.826 13:56:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:18.826 13:56:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.826 13:56:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:18.826 13:56:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.826 13:56:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:18.826 13:56:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:18.826 13:56:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:19.086 00:25:19.086 13:56:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:25:19.086 13:56:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:25:19.086 13:56:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:19.346 13:56:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:19.346 13:56:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:19.346 13:56:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.346 13:56:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:19.346 13:56:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.346 13:56:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:25:19.346 { 00:25:19.346 "cntlid": 107, 00:25:19.346 "qid": 0, 00:25:19.346 "state": "enabled", 00:25:19.346 "thread": "nvmf_tgt_poll_group_000", 00:25:19.346 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:25:19.346 "listen_address": { 00:25:19.346 "trtype": "RDMA", 00:25:19.346 "adrfam": "IPv4", 00:25:19.346 "traddr": "192.168.100.8", 00:25:19.346 "trsvcid": "4420" 00:25:19.346 }, 00:25:19.346 "peer_address": { 00:25:19.346 "trtype": "RDMA", 00:25:19.346 "adrfam": "IPv4", 00:25:19.346 "traddr": "192.168.100.8", 00:25:19.346 "trsvcid": "42444" 00:25:19.346 }, 00:25:19.346 "auth": { 00:25:19.346 "state": "completed", 00:25:19.346 "digest": "sha512", 00:25:19.346 "dhgroup": "ffdhe2048" 00:25:19.346 } 00:25:19.346 } 00:25:19.346 ]' 00:25:19.346 13:56:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:25:19.346 13:56:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:25:19.346 13:56:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:25:19.346 13:56:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:25:19.346 13:56:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:25:19.346 13:56:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:19.346 13:56:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:19.346 13:56:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:19.605 13:56:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTdjNTRmZDg5N2Q5M2IwNjk0NDA5NTkzNjg0YWZmNDU20hyw: --dhchap-ctrl-secret DHHC-1:02:YWMyZDczYzA3MWM5ZDI5ZjZlZjVjYjYxZTgxMTkxYTMxZWJlNTgxODExYTE1M2YwB17WsA==: 
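The nvme_connect wrapper above (auth.sh@80) expands to the raw nvme connect that follows; together they exercise bidirectional DH-HMAC-CHAP, where --dhchap-secret carries the host's key and --dhchap-ctrl-secret carries the controller key the host uses to authenticate the target in return. A minimal host-side sketch of the same call shape, with illustrative placeholder key material rather than the secrets from this run:

  # Connect over RDMA with bidirectional in-band authentication:
  # the target verifies <host-key>, the host verifies <ctrl-key>.
  nvme connect -t rdma -a 192.168.100.8 \
      -n nqn.2024-03.io.spdk:cnode0 -i 1 -l 0 \
      -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 \
      --hostid 00bafac1-9c9c-e711-906e-0017a4403562 \
      --dhchap-secret "DHHC-1:01:<host-key>:" \
      --dhchap-ctrl-secret "DHHC-1:02:<ctrl-key>:"

Dropping --dhchap-ctrl-secret falls back to unidirectional authentication of the host only, which is what the key3 iterations in this log do.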
00:25:19.605 13:56:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OTdjNTRmZDg5N2Q5M2IwNjk0NDA5NTkzNjg0YWZmNDU20hyw: --dhchap-ctrl-secret DHHC-1:02:YWMyZDczYzA3MWM5ZDI5ZjZlZjVjYjYxZTgxMTkxYTMxZWJlNTgxODExYTE1M2YwB17WsA==: 00:25:20.173 13:56:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:20.173 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:20.173 13:56:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:25:20.173 13:56:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.173 13:56:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:20.173 13:56:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.173 13:56:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:25:20.173 13:56:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:20.173 13:56:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:20.432 13:56:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:25:20.433 13:56:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:25:20.433 13:56:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:25:20.433 13:56:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:25:20.433 13:56:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:25:20.433 13:56:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:25:20.433 13:56:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:20.433 13:56:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.433 13:56:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:20.433 13:56:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.433 13:56:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:20.433 13:56:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:20.433 13:56:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:20.692 00:25:20.692 13:56:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:25:20.692 13:56:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:25:20.692 13:56:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:20.951 13:56:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:20.951 13:56:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:20.951 13:56:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.951 13:56:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:20.951 13:56:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.951 13:56:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:25:20.951 { 00:25:20.951 "cntlid": 109, 00:25:20.951 "qid": 0, 00:25:20.951 "state": "enabled", 00:25:20.951 "thread": "nvmf_tgt_poll_group_000", 00:25:20.951 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:25:20.951 "listen_address": { 00:25:20.951 "trtype": "RDMA", 00:25:20.951 "adrfam": "IPv4", 00:25:20.951 "traddr": "192.168.100.8", 00:25:20.951 "trsvcid": "4420" 00:25:20.951 }, 00:25:20.951 "peer_address": { 00:25:20.951 "trtype": "RDMA", 00:25:20.951 "adrfam": "IPv4", 00:25:20.951 "traddr": "192.168.100.8", 00:25:20.951 "trsvcid": "40610" 00:25:20.951 }, 00:25:20.951 "auth": { 00:25:20.951 "state": "completed", 00:25:20.951 "digest": "sha512", 00:25:20.951 "dhgroup": "ffdhe2048" 00:25:20.951 } 00:25:20.951 } 00:25:20.951 ]' 00:25:20.951 13:56:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:25:20.951 13:56:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:25:20.951 13:56:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:25:20.951 13:56:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:25:20.951 13:56:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:25:20.951 13:56:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:20.951 13:56:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:20.951 13:56:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:21.210 13:56:20 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzlkODZiZTFjMDUxY2RhNGEwZmVmZWJmYzEwMDJlMzFiZGUxMDE0Njk4MDk5MDRjEe2NXQ==: --dhchap-ctrl-secret DHHC-1:01:OTgxZWY0M2NiMWNlZmUwNjg3MGY5ODQ3YmE4MGJlZGH+9nhS: 00:25:21.210 13:56:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YzlkODZiZTFjMDUxY2RhNGEwZmVmZWJmYzEwMDJlMzFiZGUxMDE0Njk4MDk5MDRjEe2NXQ==: --dhchap-ctrl-secret DHHC-1:01:OTgxZWY0M2NiMWNlZmUwNjg3MGY5ODQ3YmE4MGJlZGH+9nhS: 00:25:21.778 13:56:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:22.037 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:22.037 13:56:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:25:22.037 13:56:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.037 13:56:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:22.037 13:56:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.037 13:56:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:25:22.037 13:56:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:22.037 13:56:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:22.037 13:56:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:25:22.037 13:56:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:25:22.037 13:56:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:25:22.037 13:56:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:25:22.037 13:56:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:25:22.037 13:56:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:25:22.037 13:56:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key3 00:25:22.037 13:56:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.037 13:56:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:22.037 13:56:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.037 13:56:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:25:22.037 13:56:21 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:25:22.037 13:56:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:25:22.296 00:25:22.296 13:56:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:25:22.296 13:56:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:22.296 13:56:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:25:22.555 13:56:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:22.555 13:56:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:22.555 13:56:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.555 13:56:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:22.555 13:56:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.555 13:56:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:25:22.555 { 00:25:22.555 "cntlid": 111, 00:25:22.555 "qid": 0, 00:25:22.555 "state": "enabled", 00:25:22.555 "thread": "nvmf_tgt_poll_group_000", 00:25:22.555 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:25:22.555 "listen_address": { 00:25:22.555 "trtype": "RDMA", 00:25:22.555 "adrfam": "IPv4", 00:25:22.555 "traddr": "192.168.100.8", 00:25:22.555 "trsvcid": "4420" 00:25:22.555 }, 00:25:22.555 "peer_address": { 00:25:22.555 "trtype": "RDMA", 00:25:22.555 "adrfam": "IPv4", 00:25:22.555 "traddr": "192.168.100.8", 00:25:22.555 "trsvcid": "58231" 00:25:22.555 }, 00:25:22.555 "auth": { 00:25:22.555 "state": "completed", 00:25:22.555 "digest": "sha512", 00:25:22.555 "dhgroup": "ffdhe2048" 00:25:22.555 } 00:25:22.555 } 00:25:22.555 ]' 00:25:22.555 13:56:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:25:22.555 13:56:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:25:22.555 13:56:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:25:22.555 13:56:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:25:22.555 13:56:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:25:22.555 13:56:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:22.555 13:56:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:22.555 13:56:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:22.814 13:56:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MDU2MGQ0Zjk3MGRlOGZmZjNiNTFmZTAxZTk3M2E2OWMxMGQwODU5NDIwODhlMjNjYTA1ZWJiMjMxYThlY2U2YWHslm0=: 00:25:22.814 13:56:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MDU2MGQ0Zjk3MGRlOGZmZjNiNTFmZTAxZTk3M2E2OWMxMGQwODU5NDIwODhlMjNjYTA1ZWJiMjMxYThlY2U2YWHslm0=: 00:25:23.382 13:56:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:23.641 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:23.642 13:56:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:25:23.642 13:56:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.642 13:56:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:23.642 13:56:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.642 13:56:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:25:23.642 13:56:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:25:23.642 13:56:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:23.642 13:56:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:23.642 13:56:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:25:23.642 13:56:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:25:23.642 13:56:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:25:23.642 13:56:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:25:23.642 13:56:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:25:23.642 13:56:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:25:23.642 13:56:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:23.642 13:56:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.642 13:56:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:23.642 13:56:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:25:23.642 13:56:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:23.642 13:56:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:23.642 13:56:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:23.901 00:25:23.901 13:56:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:25:23.901 13:56:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:25:23.901 13:56:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:24.160 13:56:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:24.160 13:56:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:24.160 13:56:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.160 13:56:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:24.160 13:56:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.160 13:56:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:25:24.160 { 00:25:24.160 "cntlid": 113, 00:25:24.160 "qid": 0, 00:25:24.160 "state": "enabled", 00:25:24.160 "thread": "nvmf_tgt_poll_group_000", 00:25:24.160 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:25:24.160 "listen_address": { 00:25:24.160 "trtype": "RDMA", 00:25:24.160 "adrfam": "IPv4", 00:25:24.160 "traddr": "192.168.100.8", 00:25:24.160 "trsvcid": "4420" 00:25:24.160 }, 00:25:24.160 "peer_address": { 00:25:24.160 "trtype": "RDMA", 00:25:24.160 "adrfam": "IPv4", 00:25:24.160 "traddr": "192.168.100.8", 00:25:24.160 "trsvcid": "46306" 00:25:24.160 }, 00:25:24.160 "auth": { 00:25:24.160 "state": "completed", 00:25:24.160 "digest": "sha512", 00:25:24.160 "dhgroup": "ffdhe3072" 00:25:24.160 } 00:25:24.161 } 00:25:24.161 ]' 00:25:24.161 13:56:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:25:24.161 13:56:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:25:24.161 13:56:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:25:24.161 13:56:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:25:24.161 13:56:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:25:24.419 13:56:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- 
# [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:24.419 13:56:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:24.419 13:56:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:24.420 13:56:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODAzZDVlMDFlMzFkNmU3OWY5NDhkNjQ1Zjk4YmRiZTE0NTdiMzllOGYwYzkxOGQwj682eA==: --dhchap-ctrl-secret DHHC-1:03:M2RhOGMzZGFlYjJmMDhlNWZlZTc2Njc1YWJiM2NmMWE3ZmQ0NDBjNTkxMGIxYmJiYTVkYTM0ZmNjNWUxYTI4NCRCfwA=: 00:25:24.420 13:56:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ODAzZDVlMDFlMzFkNmU3OWY5NDhkNjQ1Zjk4YmRiZTE0NTdiMzllOGYwYzkxOGQwj682eA==: --dhchap-ctrl-secret DHHC-1:03:M2RhOGMzZGFlYjJmMDhlNWZlZTc2Njc1YWJiM2NmMWE3ZmQ0NDBjNTkxMGIxYmJiYTVkYTM0ZmNjNWUxYTI4NCRCfwA=: 00:25:24.988 13:56:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:25.247 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:25.247 13:56:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:25:25.247 13:56:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.247 13:56:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:25.247 13:56:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.247 13:56:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:25:25.247 13:56:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:25.247 13:56:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:25.506 13:56:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:25:25.506 13:56:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:25:25.506 13:56:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:25:25.506 13:56:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:25:25.506 13:56:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:25:25.506 13:56:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:25:25.506 13:56:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 
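The @119 and @120 markers trace the two loops driving this whole section: the outer loop walks the DH groups (null, then the ffdhe series) and the inner loop walks the key slots, re-running the same connect/verify/teardown cycle for every combination. A simplified reconstruction of that loop shape, assuming the hostrpc and connect_authenticate helpers the trace shows (not the verbatim script; in this part of the run the digest is pinned to sha512):

  # Outer loop: one DH group at a time; inner loop: one key slot at a time.
  # Restricting the host options first forces each attach to negotiate
  # exactly the digest/dhgroup pair under test.
  for dhgroup in "${dhgroups[@]}"; do
      for keyid in "${!keys[@]}"; do
          hostrpc bdev_nvme_set_options --dhchap-digests sha512 \
                                        --dhchap-dhgroups "$dhgroup"
          connect_authenticate sha512 "$dhgroup" "$keyid"
      done
  done

connect_authenticate then grants the host that key on the subsystem (nvmf_subsystem_add_host --dhchap-key key$keyid at @70) and attaches with bdev_nvme_attach_controller (@71) before the qpair checks run.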
00:25:25.506 13:56:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.506 13:56:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:25.506 13:56:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.506 13:56:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:25.506 13:56:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:25.506 13:56:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:25.764 00:25:25.764 13:56:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:25:25.764 13:56:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:25:25.764 13:56:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:25.764 13:56:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:25.764 13:56:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:25.764 13:56:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.764 13:56:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:25.764 13:56:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.764 13:56:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:25:25.764 { 00:25:25.764 "cntlid": 115, 00:25:25.764 "qid": 0, 00:25:25.764 "state": "enabled", 00:25:25.764 "thread": "nvmf_tgt_poll_group_000", 00:25:25.764 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:25:25.764 "listen_address": { 00:25:25.764 "trtype": "RDMA", 00:25:25.764 "adrfam": "IPv4", 00:25:25.764 "traddr": "192.168.100.8", 00:25:25.764 "trsvcid": "4420" 00:25:25.764 }, 00:25:25.764 "peer_address": { 00:25:25.764 "trtype": "RDMA", 00:25:25.764 "adrfam": "IPv4", 00:25:25.764 "traddr": "192.168.100.8", 00:25:25.764 "trsvcid": "38469" 00:25:25.764 }, 00:25:25.764 "auth": { 00:25:25.764 "state": "completed", 00:25:25.764 "digest": "sha512", 00:25:25.764 "dhgroup": "ffdhe3072" 00:25:25.764 } 00:25:25.764 } 00:25:25.764 ]' 00:25:25.764 13:56:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:25:25.764 13:56:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:25:26.022 13:56:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 
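Each verification pass (@73 through @77) asserts three fields of the first qpair returned by nvmf_subsystem_get_qpairs; the backslash-heavy right-hand sides in the trace (\f\f\d\h\e\3\0\7\2 and the like) are just how xtrace prints a quoted literal inside [[ ]]. A standalone sketch of the same checks, assuming $qpairs holds JSON like the block above:

  # The iteration only passes if the negotiated parameters match what the
  # host was restricted to and the DH-HMAC-CHAP exchange finished.
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512    ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe3072 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]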
00:25:26.022 13:56:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:25:26.022 13:56:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:25:26.022 13:56:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:26.022 13:56:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:26.022 13:56:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:26.281 13:56:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTdjNTRmZDg5N2Q5M2IwNjk0NDA5NTkzNjg0YWZmNDU20hyw: --dhchap-ctrl-secret DHHC-1:02:YWMyZDczYzA3MWM5ZDI5ZjZlZjVjYjYxZTgxMTkxYTMxZWJlNTgxODExYTE1M2YwB17WsA==: 00:25:26.281 13:56:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OTdjNTRmZDg5N2Q5M2IwNjk0NDA5NTkzNjg0YWZmNDU20hyw: --dhchap-ctrl-secret DHHC-1:02:YWMyZDczYzA3MWM5ZDI5ZjZlZjVjYjYxZTgxMTkxYTMxZWJlNTgxODExYTE1M2YwB17WsA==: 00:25:26.847 13:56:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:26.847 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:26.847 13:56:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:25:26.847 13:56:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.847 13:56:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:26.847 13:56:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.847 13:56:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:25:26.847 13:56:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:26.847 13:56:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:27.105 13:56:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:25:27.105 13:56:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:25:27.105 13:56:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:25:27.105 13:56:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:25:27.105 13:56:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:25:27.105 13:56:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:25:27.105 
13:56:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:27.105 13:56:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.105 13:56:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:27.105 13:56:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.105 13:56:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:27.105 13:56:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:27.105 13:56:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:27.363 00:25:27.363 13:56:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:25:27.363 13:56:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:25:27.363 13:56:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:27.620 13:56:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:27.620 13:56:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:27.620 13:56:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.621 13:56:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:27.621 13:56:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.621 13:56:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:25:27.621 { 00:25:27.621 "cntlid": 117, 00:25:27.621 "qid": 0, 00:25:27.621 "state": "enabled", 00:25:27.621 "thread": "nvmf_tgt_poll_group_000", 00:25:27.621 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:25:27.621 "listen_address": { 00:25:27.621 "trtype": "RDMA", 00:25:27.621 "adrfam": "IPv4", 00:25:27.621 "traddr": "192.168.100.8", 00:25:27.621 "trsvcid": "4420" 00:25:27.621 }, 00:25:27.621 "peer_address": { 00:25:27.621 "trtype": "RDMA", 00:25:27.621 "adrfam": "IPv4", 00:25:27.621 "traddr": "192.168.100.8", 00:25:27.621 "trsvcid": "53896" 00:25:27.621 }, 00:25:27.621 "auth": { 00:25:27.621 "state": "completed", 00:25:27.621 "digest": "sha512", 00:25:27.621 "dhgroup": "ffdhe3072" 00:25:27.621 } 00:25:27.621 } 00:25:27.621 ]' 00:25:27.621 13:56:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:25:27.621 13:56:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:25:27.621 13:56:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:25:27.621 13:56:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:25:27.621 13:56:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:25:27.621 13:56:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:27.621 13:56:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:27.621 13:56:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:27.879 13:56:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzlkODZiZTFjMDUxY2RhNGEwZmVmZWJmYzEwMDJlMzFiZGUxMDE0Njk4MDk5MDRjEe2NXQ==: --dhchap-ctrl-secret DHHC-1:01:OTgxZWY0M2NiMWNlZmUwNjg3MGY5ODQ3YmE4MGJlZGH+9nhS: 00:25:27.879 13:56:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YzlkODZiZTFjMDUxY2RhNGEwZmVmZWJmYzEwMDJlMzFiZGUxMDE0Njk4MDk5MDRjEe2NXQ==: --dhchap-ctrl-secret DHHC-1:01:OTgxZWY0M2NiMWNlZmUwNjg3MGY5ODQ3YmE4MGJlZGH+9nhS: 00:25:28.445 13:56:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:28.446 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:28.446 13:56:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:25:28.446 13:56:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.446 13:56:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:28.446 13:56:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.446 13:56:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:25:28.446 13:56:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:28.446 13:56:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:28.703 13:56:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:25:28.703 13:56:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:25:28.703 13:56:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:25:28.703 13:56:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 
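
The round that follows exercises key3, and it differs from keys 0-2: both nvmf_subsystem_add_host and the controller attach are invoked without --dhchap-ctrlr-key, so the host authenticates to the controller but does not demand authentication back. That is the ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) expansion at work: bash's ${var:+word} form yields the option pair only when a controller key exists for that index. A standalone illustration, with hypothetical key material:

    # ckeys[] mirrors the script's array; index 3 is deliberately empty.
    ckeys=("DHHC-1:03:aaa..." "DHHC-1:03:bbb..." "DHHC-1:03:ccc..." "")
    for keyid in 0 3; do
        ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
        echo "key$keyid -> ${ckey[@]:-<no controller key>}"
    done
    # key0 -> --dhchap-ctrlr-key ckey0
    # key3 -> <no controller key>

In the script itself $3 is the keyid argument of connect_authenticate, which is why the key3 traces below carry no ckey3 anywhere.
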
00:25:28.703 13:56:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:25:28.703 13:56:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:25:28.703 13:56:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key3 00:25:28.703 13:56:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.703 13:56:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:28.703 13:56:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.703 13:56:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:25:28.703 13:56:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:25:28.703 13:56:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:25:28.961 00:25:28.961 13:56:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:25:28.961 13:56:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:25:28.961 13:56:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:29.219 13:56:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:29.219 13:56:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:29.219 13:56:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.219 13:56:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:29.219 13:56:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.219 13:56:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:25:29.219 { 00:25:29.219 "cntlid": 119, 00:25:29.219 "qid": 0, 00:25:29.219 "state": "enabled", 00:25:29.219 "thread": "nvmf_tgt_poll_group_000", 00:25:29.219 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:25:29.219 "listen_address": { 00:25:29.219 "trtype": "RDMA", 00:25:29.219 "adrfam": "IPv4", 00:25:29.219 "traddr": "192.168.100.8", 00:25:29.219 "trsvcid": "4420" 00:25:29.219 }, 00:25:29.219 "peer_address": { 00:25:29.219 "trtype": "RDMA", 00:25:29.219 "adrfam": "IPv4", 00:25:29.219 "traddr": "192.168.100.8", 00:25:29.219 "trsvcid": "34860" 00:25:29.219 }, 00:25:29.219 "auth": { 00:25:29.219 "state": "completed", 00:25:29.219 "digest": "sha512", 00:25:29.219 "dhgroup": "ffdhe3072" 
00:25:29.219 } 00:25:29.219 } 00:25:29.219 ]' 00:25:29.219 13:56:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:25:29.219 13:56:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:25:29.219 13:56:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:25:29.219 13:56:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:25:29.219 13:56:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:25:29.219 13:56:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:29.219 13:56:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:29.219 13:56:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:29.476 13:56:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MDU2MGQ0Zjk3MGRlOGZmZjNiNTFmZTAxZTk3M2E2OWMxMGQwODU5NDIwODhlMjNjYTA1ZWJiMjMxYThlY2U2YWHslm0=: 00:25:29.476 13:56:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MDU2MGQ0Zjk3MGRlOGZmZjNiNTFmZTAxZTk3M2E2OWMxMGQwODU5NDIwODhlMjNjYTA1ZWJiMjMxYThlY2U2YWHslm0=: 00:25:30.040 13:56:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:30.297 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:30.297 13:56:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:25:30.297 13:56:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.297 13:56:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:30.297 13:56:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.297 13:56:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:25:30.297 13:56:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:25:30.297 13:56:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:30.297 13:56:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:30.297 13:56:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:25:30.297 13:56:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:25:30.297 13:56:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@67 -- # digest=sha512 00:25:30.297 13:56:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:25:30.297 13:56:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:25:30.297 13:56:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:25:30.297 13:56:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:30.297 13:56:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.297 13:56:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:30.297 13:56:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.297 13:56:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:30.297 13:56:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:30.297 13:56:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:30.554 00:25:30.554 13:56:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:25:30.554 13:56:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:25:30.554 13:56:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:30.812 13:56:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:30.812 13:56:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:30.812 13:56:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.812 13:56:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:30.812 13:56:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.812 13:56:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:25:30.812 { 00:25:30.812 "cntlid": 121, 00:25:30.812 "qid": 0, 00:25:30.812 "state": "enabled", 00:25:30.812 "thread": "nvmf_tgt_poll_group_000", 00:25:30.812 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:25:30.812 "listen_address": { 00:25:30.812 "trtype": "RDMA", 00:25:30.812 "adrfam": "IPv4", 00:25:30.812 "traddr": "192.168.100.8", 00:25:30.812 "trsvcid": "4420" 00:25:30.812 }, 00:25:30.812 "peer_address": { 00:25:30.812 "trtype": "RDMA", 
00:25:30.812 "adrfam": "IPv4", 00:25:30.812 "traddr": "192.168.100.8", 00:25:30.812 "trsvcid": "48714" 00:25:30.812 }, 00:25:30.812 "auth": { 00:25:30.812 "state": "completed", 00:25:30.812 "digest": "sha512", 00:25:30.812 "dhgroup": "ffdhe4096" 00:25:30.812 } 00:25:30.812 } 00:25:30.812 ]' 00:25:30.812 13:56:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:25:30.812 13:56:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:25:30.812 13:56:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:25:30.812 13:56:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:25:30.812 13:56:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:25:31.070 13:56:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:31.070 13:56:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:31.070 13:56:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:31.070 13:56:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODAzZDVlMDFlMzFkNmU3OWY5NDhkNjQ1Zjk4YmRiZTE0NTdiMzllOGYwYzkxOGQwj682eA==: --dhchap-ctrl-secret DHHC-1:03:M2RhOGMzZGFlYjJmMDhlNWZlZTc2Njc1YWJiM2NmMWE3ZmQ0NDBjNTkxMGIxYmJiYTVkYTM0ZmNjNWUxYTI4NCRCfwA=: 00:25:31.070 13:56:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ODAzZDVlMDFlMzFkNmU3OWY5NDhkNjQ1Zjk4YmRiZTE0NTdiMzllOGYwYzkxOGQwj682eA==: --dhchap-ctrl-secret DHHC-1:03:M2RhOGMzZGFlYjJmMDhlNWZlZTc2Njc1YWJiM2NmMWE3ZmQ0NDBjNTkxMGIxYmJiYTVkYTM0ZmNjNWUxYTI4NCRCfwA=: 00:25:31.637 13:56:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:31.895 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:31.895 13:56:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:25:31.895 13:56:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.895 13:56:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:31.895 13:56:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.895 13:56:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:25:31.895 13:56:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:31.895 13:56:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 
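
Here the suite has finished ffdhe4096/key0 and is moving on to key1 (the "for dhgroup" @119 and "for keyid" @120 markers expose the loop variables). The structure behind this whole section is two nested loops: for every DH group, the host driver is reconfigured with bdev_nvme_set_options and each key is exercised end to end. A sketch of that driver loop, hedged in that the full dhgroup and digest lists live in parts of target/auth.sh outside this excerpt (this window only shows sha512 with ffdhe3072, ffdhe4096 and ffdhe6144):

    for dhgroup in ffdhe3072 ffdhe4096 ffdhe6144; do      # @119
        for keyid in "${!keys[@]}"; do                    # @120
            # The host must advertise the digest/dhgroup before attaching.
            hostrpc bdev_nvme_set_options --dhchap-digests sha512 \
                    --dhchap-dhgroups "$dhgroup"          # @121
            # add_host + attach + qpair checks + detach, as traced above.
            connect_authenticate sha512 "$dhgroup" "$keyid"   # @123
        done
    done
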
00:25:32.153 13:56:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:25:32.153 13:56:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:25:32.153 13:56:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:25:32.153 13:56:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:25:32.153 13:56:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:25:32.153 13:56:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:25:32.153 13:56:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:32.153 13:56:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.153 13:56:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:32.153 13:56:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.153 13:56:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:32.153 13:56:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:32.153 13:56:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:32.411 00:25:32.411 13:56:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:25:32.411 13:56:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:25:32.411 13:56:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:32.411 13:56:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:32.411 13:56:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:32.411 13:56:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.411 13:56:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:32.411 13:56:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.669 13:56:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:25:32.669 { 00:25:32.669 "cntlid": 123, 00:25:32.669 "qid": 0, 00:25:32.669 "state": "enabled", 00:25:32.669 "thread": "nvmf_tgt_poll_group_000", 
00:25:32.669 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:25:32.669 "listen_address": { 00:25:32.669 "trtype": "RDMA", 00:25:32.669 "adrfam": "IPv4", 00:25:32.669 "traddr": "192.168.100.8", 00:25:32.669 "trsvcid": "4420" 00:25:32.669 }, 00:25:32.669 "peer_address": { 00:25:32.669 "trtype": "RDMA", 00:25:32.669 "adrfam": "IPv4", 00:25:32.669 "traddr": "192.168.100.8", 00:25:32.669 "trsvcid": "52007" 00:25:32.669 }, 00:25:32.669 "auth": { 00:25:32.669 "state": "completed", 00:25:32.669 "digest": "sha512", 00:25:32.669 "dhgroup": "ffdhe4096" 00:25:32.669 } 00:25:32.669 } 00:25:32.669 ]' 00:25:32.669 13:56:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:25:32.669 13:56:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:25:32.669 13:56:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:25:32.669 13:56:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:25:32.669 13:56:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:25:32.669 13:56:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:32.669 13:56:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:32.669 13:56:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:32.927 13:56:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTdjNTRmZDg5N2Q5M2IwNjk0NDA5NTkzNjg0YWZmNDU20hyw: --dhchap-ctrl-secret DHHC-1:02:YWMyZDczYzA3MWM5ZDI5ZjZlZjVjYjYxZTgxMTkxYTMxZWJlNTgxODExYTE1M2YwB17WsA==: 00:25:32.927 13:56:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OTdjNTRmZDg5N2Q5M2IwNjk0NDA5NTkzNjg0YWZmNDU20hyw: --dhchap-ctrl-secret DHHC-1:02:YWMyZDczYzA3MWM5ZDI5ZjZlZjVjYjYxZTgxMTkxYTMxZWJlNTgxODExYTE1M2YwB17WsA==: 00:25:33.494 13:56:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:33.494 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:33.494 13:56:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:25:33.494 13:56:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.494 13:56:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:33.494 13:56:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.494 13:56:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:25:33.494 13:56:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 
00:25:33.494 13:56:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:33.753 13:56:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:25:33.753 13:56:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:25:33.753 13:56:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:25:33.753 13:56:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:25:33.753 13:56:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:25:33.753 13:56:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:25:33.753 13:56:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:33.753 13:56:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.753 13:56:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:33.753 13:56:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.753 13:56:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:33.753 13:56:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:33.753 13:56:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:34.010 00:25:34.010 13:56:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:25:34.010 13:56:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:25:34.010 13:56:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:34.268 13:56:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:34.268 13:56:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:34.268 13:56:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.268 13:56:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:34.268 13:56:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
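
Interleaved with these RPC-level checks, every round also proves the path from the kernel initiator: nvme_connect hands nvme-cli the same key material as DHHC-1 strings (a host secret, plus a controller secret whenever mutual authentication is configured), the connect must come up, and the connection and host entry are torn down again before the next key. The shape of that call, with placeholders standing in for the DHHC-1 blobs printed in the trace (generated earlier in the run, e.g. with nvme gen-dhchap-key):

    uuid=00bafac1-9c9c-e711-906e-0017a4403562
    nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "nqn.2014-08.org.nvmexpress:uuid:$uuid" --hostid "$uuid" -l 0 \
        --dhchap-secret "DHHC-1:01:<host-secret>" \
        --dhchap-ctrl-secret "DHHC-1:02:<ctrl-secret>"
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0   # expect: 1 controller(s)

After the disconnect, nvmf_subsystem_remove_host clears the host entry on the target so the next keyid starts from a clean allow list.
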
00:25:34.268 13:56:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:25:34.268 { 00:25:34.268 "cntlid": 125, 00:25:34.268 "qid": 0, 00:25:34.268 "state": "enabled", 00:25:34.268 "thread": "nvmf_tgt_poll_group_000", 00:25:34.268 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:25:34.268 "listen_address": { 00:25:34.268 "trtype": "RDMA", 00:25:34.268 "adrfam": "IPv4", 00:25:34.268 "traddr": "192.168.100.8", 00:25:34.268 "trsvcid": "4420" 00:25:34.268 }, 00:25:34.268 "peer_address": { 00:25:34.268 "trtype": "RDMA", 00:25:34.268 "adrfam": "IPv4", 00:25:34.268 "traddr": "192.168.100.8", 00:25:34.268 "trsvcid": "53482" 00:25:34.268 }, 00:25:34.268 "auth": { 00:25:34.268 "state": "completed", 00:25:34.268 "digest": "sha512", 00:25:34.268 "dhgroup": "ffdhe4096" 00:25:34.268 } 00:25:34.268 } 00:25:34.268 ]' 00:25:34.268 13:56:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:25:34.268 13:56:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:25:34.268 13:56:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:25:34.268 13:56:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:25:34.268 13:56:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:25:34.268 13:56:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:34.268 13:56:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:34.268 13:56:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:34.527 13:56:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzlkODZiZTFjMDUxY2RhNGEwZmVmZWJmYzEwMDJlMzFiZGUxMDE0Njk4MDk5MDRjEe2NXQ==: --dhchap-ctrl-secret DHHC-1:01:OTgxZWY0M2NiMWNlZmUwNjg3MGY5ODQ3YmE4MGJlZGH+9nhS: 00:25:34.527 13:56:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YzlkODZiZTFjMDUxY2RhNGEwZmVmZWJmYzEwMDJlMzFiZGUxMDE0Njk4MDk5MDRjEe2NXQ==: --dhchap-ctrl-secret DHHC-1:01:OTgxZWY0M2NiMWNlZmUwNjg3MGY5ODQ3YmE4MGJlZGH+9nhS: 00:25:35.093 13:56:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:35.093 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:35.093 13:56:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:25:35.093 13:56:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.093 13:56:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:35.351 13:56:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.351 13:56:34 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:25:35.351 13:56:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:35.351 13:56:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:35.351 13:56:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:25:35.351 13:56:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:25:35.351 13:56:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:25:35.351 13:56:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:25:35.351 13:56:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:25:35.351 13:56:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:25:35.351 13:56:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key3 00:25:35.351 13:56:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.351 13:56:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:35.351 13:56:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.351 13:56:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:25:35.351 13:56:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:25:35.351 13:56:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:25:35.609 00:25:35.609 13:56:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:25:35.609 13:56:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:25:35.609 13:56:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:35.866 13:56:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:35.866 13:56:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:35.866 13:56:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.866 13:56:35 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:35.866 13:56:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.866 13:56:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:25:35.866 { 00:25:35.866 "cntlid": 127, 00:25:35.866 "qid": 0, 00:25:35.866 "state": "enabled", 00:25:35.866 "thread": "nvmf_tgt_poll_group_000", 00:25:35.866 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:25:35.866 "listen_address": { 00:25:35.866 "trtype": "RDMA", 00:25:35.866 "adrfam": "IPv4", 00:25:35.866 "traddr": "192.168.100.8", 00:25:35.866 "trsvcid": "4420" 00:25:35.866 }, 00:25:35.866 "peer_address": { 00:25:35.866 "trtype": "RDMA", 00:25:35.866 "adrfam": "IPv4", 00:25:35.866 "traddr": "192.168.100.8", 00:25:35.866 "trsvcid": "34061" 00:25:35.866 }, 00:25:35.866 "auth": { 00:25:35.866 "state": "completed", 00:25:35.866 "digest": "sha512", 00:25:35.866 "dhgroup": "ffdhe4096" 00:25:35.866 } 00:25:35.866 } 00:25:35.866 ]' 00:25:35.866 13:56:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:25:35.866 13:56:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:25:35.866 13:56:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:25:35.866 13:56:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:25:35.866 13:56:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:25:35.866 13:56:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:35.866 13:56:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:35.866 13:56:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:36.122 13:56:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MDU2MGQ0Zjk3MGRlOGZmZjNiNTFmZTAxZTk3M2E2OWMxMGQwODU5NDIwODhlMjNjYTA1ZWJiMjMxYThlY2U2YWHslm0=: 00:25:36.122 13:56:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MDU2MGQ0Zjk3MGRlOGZmZjNiNTFmZTAxZTk3M2E2OWMxMGQwODU5NDIwODhlMjNjYTA1ZWJiMjMxYThlY2U2YWHslm0=: 00:25:36.687 13:56:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:36.945 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:36.945 13:56:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:25:36.945 13:56:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.945 13:56:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:36.945 13:56:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.945 13:56:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:25:36.945 13:56:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:25:36.945 13:56:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:36.945 13:56:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:36.945 13:56:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:25:36.945 13:56:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:25:36.945 13:56:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:25:36.945 13:56:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:25:36.945 13:56:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:25:36.945 13:56:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:25:36.945 13:56:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:36.945 13:56:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.945 13:56:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:37.203 13:56:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.203 13:56:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:37.203 13:56:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:37.203 13:56:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:37.461 00:25:37.461 13:56:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:25:37.461 13:56:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:25:37.461 13:56:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:37.721 13:56:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:37.721 13:56:37 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:37.721 13:56:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.721 13:56:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:37.721 13:56:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.721 13:56:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:25:37.721 { 00:25:37.721 "cntlid": 129, 00:25:37.721 "qid": 0, 00:25:37.721 "state": "enabled", 00:25:37.721 "thread": "nvmf_tgt_poll_group_000", 00:25:37.721 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:25:37.721 "listen_address": { 00:25:37.721 "trtype": "RDMA", 00:25:37.721 "adrfam": "IPv4", 00:25:37.721 "traddr": "192.168.100.8", 00:25:37.721 "trsvcid": "4420" 00:25:37.721 }, 00:25:37.721 "peer_address": { 00:25:37.721 "trtype": "RDMA", 00:25:37.721 "adrfam": "IPv4", 00:25:37.721 "traddr": "192.168.100.8", 00:25:37.721 "trsvcid": "42350" 00:25:37.721 }, 00:25:37.721 "auth": { 00:25:37.721 "state": "completed", 00:25:37.721 "digest": "sha512", 00:25:37.721 "dhgroup": "ffdhe6144" 00:25:37.721 } 00:25:37.721 } 00:25:37.721 ]' 00:25:37.721 13:56:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:25:37.721 13:56:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:25:37.721 13:56:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:25:37.721 13:56:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:25:37.721 13:56:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:25:37.721 13:56:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:37.721 13:56:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:37.721 13:56:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:37.980 13:56:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODAzZDVlMDFlMzFkNmU3OWY5NDhkNjQ1Zjk4YmRiZTE0NTdiMzllOGYwYzkxOGQwj682eA==: --dhchap-ctrl-secret DHHC-1:03:M2RhOGMzZGFlYjJmMDhlNWZlZTc2Njc1YWJiM2NmMWE3ZmQ0NDBjNTkxMGIxYmJiYTVkYTM0ZmNjNWUxYTI4NCRCfwA=: 00:25:37.980 13:56:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ODAzZDVlMDFlMzFkNmU3OWY5NDhkNjQ1Zjk4YmRiZTE0NTdiMzllOGYwYzkxOGQwj682eA==: --dhchap-ctrl-secret DHHC-1:03:M2RhOGMzZGFlYjJmMDhlNWZlZTc2Njc1YWJiM2NmMWE3ZmQ0NDBjNTkxMGIxYmJiYTVkYTM0ZmNjNWUxYTI4NCRCfwA=: 00:25:38.547 13:56:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:38.547 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:38.547 13:56:38 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:25:38.547 13:56:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.547 13:56:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:38.547 13:56:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.547 13:56:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:25:38.547 13:56:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:38.547 13:56:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:38.806 13:56:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:25:38.806 13:56:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:25:38.806 13:56:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:25:38.806 13:56:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:25:38.806 13:56:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:25:38.806 13:56:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:25:38.806 13:56:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:38.806 13:56:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.806 13:56:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:38.806 13:56:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.806 13:56:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:38.806 13:56:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:38.806 13:56:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:39.065 00:25:39.065 13:56:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:25:39.065 13:56:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq 
-r '.[].name' 00:25:39.065 13:56:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:39.324 13:56:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:39.324 13:56:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:39.324 13:56:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.324 13:56:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:39.324 13:56:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.324 13:56:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:25:39.324 { 00:25:39.324 "cntlid": 131, 00:25:39.324 "qid": 0, 00:25:39.324 "state": "enabled", 00:25:39.324 "thread": "nvmf_tgt_poll_group_000", 00:25:39.324 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:25:39.324 "listen_address": { 00:25:39.324 "trtype": "RDMA", 00:25:39.324 "adrfam": "IPv4", 00:25:39.324 "traddr": "192.168.100.8", 00:25:39.324 "trsvcid": "4420" 00:25:39.324 }, 00:25:39.324 "peer_address": { 00:25:39.324 "trtype": "RDMA", 00:25:39.324 "adrfam": "IPv4", 00:25:39.324 "traddr": "192.168.100.8", 00:25:39.324 "trsvcid": "39704" 00:25:39.324 }, 00:25:39.324 "auth": { 00:25:39.324 "state": "completed", 00:25:39.324 "digest": "sha512", 00:25:39.324 "dhgroup": "ffdhe6144" 00:25:39.324 } 00:25:39.324 } 00:25:39.324 ]' 00:25:39.324 13:56:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:25:39.324 13:56:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:25:39.324 13:56:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:25:39.584 13:56:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:25:39.584 13:56:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:25:39.584 13:56:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:39.584 13:56:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:39.584 13:56:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:39.584 13:56:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTdjNTRmZDg5N2Q5M2IwNjk0NDA5NTkzNjg0YWZmNDU20hyw: --dhchap-ctrl-secret DHHC-1:02:YWMyZDczYzA3MWM5ZDI5ZjZlZjVjYjYxZTgxMTkxYTMxZWJlNTgxODExYTE1M2YwB17WsA==: 00:25:39.584 13:56:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OTdjNTRmZDg5N2Q5M2IwNjk0NDA5NTkzNjg0YWZmNDU20hyw: --dhchap-ctrl-secret 
DHHC-1:02:YWMyZDczYzA3MWM5ZDI5ZjZlZjVjYjYxZTgxMTkxYTMxZWJlNTgxODExYTE1M2YwB17WsA==:
00:25:40.151 13:56:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:25:40.410 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:25:40.410 13:56:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562
00:25:40.410 13:56:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:40.410 13:56:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:25:40.410 13:56:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:40.410 13:56:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:25:40.410 13:56:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:25:40.410 13:56:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:25:40.669 13:56:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2
00:25:40.669 13:56:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:25:40.669 13:56:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:25:40.669 13:56:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:25:40.669 13:56:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:25:40.669 13:56:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:25:40.669 13:56:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:25:40.669 13:56:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:40.669 13:56:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:25:40.669 13:56:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:40.669 13:56:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:25:40.669 13:56:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:25:40.669 13:56:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:25:40.929
00:25:40.929 13:56:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:25:40.929 13:56:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:25:40.929 13:56:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:25:41.188 13:56:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:41.188 13:56:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:25:41.188 13:56:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:41.188 13:56:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:25:41.188 13:56:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:41.188 13:56:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:25:41.188 {
00:25:41.188 "cntlid": 133,
00:25:41.188 "qid": 0,
00:25:41.188 "state": "enabled",
00:25:41.188 "thread": "nvmf_tgt_poll_group_000",
00:25:41.188 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562",
00:25:41.188 "listen_address": {
00:25:41.188 "trtype": "RDMA",
00:25:41.188 "adrfam": "IPv4",
00:25:41.188 "traddr": "192.168.100.8",
00:25:41.188 "trsvcid": "4420"
00:25:41.188 },
00:25:41.188 "peer_address": {
00:25:41.188 "trtype": "RDMA",
00:25:41.188 "adrfam": "IPv4",
00:25:41.188 "traddr": "192.168.100.8",
00:25:41.188 "trsvcid": "55652"
00:25:41.188 },
00:25:41.188 "auth": {
00:25:41.188 "state": "completed",
00:25:41.188 "digest": "sha512",
00:25:41.188 "dhgroup": "ffdhe6144"
00:25:41.188 }
00:25:41.188 }
00:25:41.188 ]'
00:25:41.188 13:56:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:25:41.188 13:56:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:25:41.188 13:56:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:25:41.188 13:56:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:25:41.188 13:56:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:25:41.188 13:56:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:25:41.188 13:56:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:25:41.188 13:56:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:25:41.447 13:56:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzlkODZiZTFjMDUxY2RhNGEwZmVmZWJmYzEwMDJlMzFiZGUxMDE0Njk4MDk5MDRjEe2NXQ==: --dhchap-ctrl-secret DHHC-1:01:OTgxZWY0M2NiMWNlZmUwNjg3MGY5ODQ3YmE4MGJlZGH+9nhS:
00:25:41.447 13:56:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YzlkODZiZTFjMDUxY2RhNGEwZmVmZWJmYzEwMDJlMzFiZGUxMDE0Njk4MDk5MDRjEe2NXQ==: --dhchap-ctrl-secret DHHC-1:01:OTgxZWY0M2NiMWNlZmUwNjg3MGY5ODQ3YmE4MGJlZGH+9nhS:
00:25:42.014 13:56:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:25:42.014 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:25:42.014 13:56:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562
00:25:42.014 13:56:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:42.014 13:56:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:25:42.273 13:56:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:42.273 13:56:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:25:42.273 13:56:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:25:42.273 13:56:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:25:42.273 13:56:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3
00:25:42.273 13:56:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:25:42.273 13:56:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:25:42.274 13:56:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:25:42.274 13:56:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:25:42.274 13:56:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:25:42.274 13:56:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key3
00:25:42.274 13:56:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:42.274 13:56:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:25:42.274 13:56:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:42.274 13:56:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:25:42.274 13:56:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:25:42.274 13:56:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:25:42.533
00:25:42.792 13:56:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:25:42.792 13:56:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:25:42.792 13:56:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:25:42.792 13:56:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:42.792 13:56:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:25:42.792 13:56:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:42.793 13:56:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:25:42.793 13:56:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:42.793 13:56:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:25:42.793 {
00:25:42.793 "cntlid": 135,
00:25:42.793 "qid": 0,
00:25:42.793 "state": "enabled",
00:25:42.793 "thread": "nvmf_tgt_poll_group_000",
00:25:42.793 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562",
00:25:42.793 "listen_address": {
00:25:42.793 "trtype": "RDMA",
00:25:42.793 "adrfam": "IPv4",
00:25:42.793 "traddr": "192.168.100.8",
00:25:42.793 "trsvcid": "4420"
00:25:42.793 },
00:25:42.793 "peer_address": {
00:25:42.793 "trtype": "RDMA",
00:25:42.793 "adrfam": "IPv4",
00:25:42.793 "traddr": "192.168.100.8",
00:25:42.793 "trsvcid": "47367"
00:25:42.793 },
00:25:42.793 "auth": {
00:25:42.793 "state": "completed",
00:25:42.793 "digest": "sha512",
00:25:42.793 "dhgroup": "ffdhe6144"
00:25:42.793 }
00:25:42.793 }
00:25:42.793 ]'
00:25:42.793 13:56:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:25:42.793 13:56:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:25:42.793 13:56:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:25:43.051 13:56:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:25:43.051 13:56:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:25:43.051 13:56:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:25:43.051 13:56:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:25:43.051 13:56:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:25:43.310 13:56:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MDU2MGQ0Zjk3MGRlOGZmZjNiNTFmZTAxZTk3M2E2OWMxMGQwODU5NDIwODhlMjNjYTA1ZWJiMjMxYThlY2U2YWHslm0=:
00:25:43.310 13:56:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MDU2MGQ0Zjk3MGRlOGZmZjNiNTFmZTAxZTk3M2E2OWMxMGQwODU5NDIwODhlMjNjYTA1ZWJiMjMxYThlY2U2YWHslm0=:
00:25:43.877 13:56:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:25:43.877 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:25:43.877 13:56:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562
00:25:43.877 13:56:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:43.877 13:56:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:25:43.877 13:56:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:43.877 13:56:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:25:43.877 13:56:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:25:43.877 13:56:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:25:43.877 13:56:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:25:44.136 13:56:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0
00:25:44.136 13:56:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:25:44.136 13:56:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:25:44.136 13:56:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:25:44.136 13:56:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:25:44.136 13:56:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:25:44.136 13:56:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:25:44.136 13:56:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:44.136 13:56:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:25:44.136 13:56:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:44.136 13:56:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:25:44.136 13:56:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:25:44.136 13:56:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:25:44.394
00:25:44.652 13:56:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:25:44.652 13:56:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:25:44.652 13:56:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:25:44.652 13:56:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:44.652 13:56:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:25:44.652 13:56:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:44.652 13:56:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:25:44.652 13:56:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:44.652 13:56:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:25:44.652 {
00:25:44.652 "cntlid": 137,
00:25:44.652 "qid": 0,
00:25:44.652 "state": "enabled",
00:25:44.652 "thread": "nvmf_tgt_poll_group_000",
00:25:44.652 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562",
00:25:44.652 "listen_address": {
00:25:44.652 "trtype": "RDMA",
00:25:44.652 "adrfam": "IPv4",
00:25:44.652 "traddr": "192.168.100.8",
00:25:44.652 "trsvcid": "4420"
00:25:44.652 },
00:25:44.652 "peer_address": {
00:25:44.652 "trtype": "RDMA",
00:25:44.652 "adrfam": "IPv4",
00:25:44.652 "traddr": "192.168.100.8",
00:25:44.652 "trsvcid": "57721"
00:25:44.652 },
00:25:44.652 "auth": {
00:25:44.652 "state": "completed",
00:25:44.652 "digest": "sha512",
00:25:44.652 "dhgroup": "ffdhe8192"
00:25:44.652 }
00:25:44.652 }
00:25:44.652 ]'
00:25:44.652 13:56:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:25:44.652 13:56:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:25:44.652 13:56:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:25:44.911 13:56:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:25:44.911 13:56:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:25:44.911 13:56:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:25:44.911 13:56:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:25:44.911 13:56:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:25:44.911 13:56:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODAzZDVlMDFlMzFkNmU3OWY5NDhkNjQ1Zjk4YmRiZTE0NTdiMzllOGYwYzkxOGQwj682eA==: --dhchap-ctrl-secret DHHC-1:03:M2RhOGMzZGFlYjJmMDhlNWZlZTc2Njc1YWJiM2NmMWE3ZmQ0NDBjNTkxMGIxYmJiYTVkYTM0ZmNjNWUxYTI4NCRCfwA=:
00:25:44.911 13:56:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ODAzZDVlMDFlMzFkNmU3OWY5NDhkNjQ1Zjk4YmRiZTE0NTdiMzllOGYwYzkxOGQwj682eA==: --dhchap-ctrl-secret DHHC-1:03:M2RhOGMzZGFlYjJmMDhlNWZlZTc2Njc1YWJiM2NmMWE3ZmQ0NDBjNTkxMGIxYmJiYTVkYTM0ZmNjNWUxYTI4NCRCfwA=:
00:25:45.476 13:56:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:25:45.795 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:25:45.795 13:56:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562
00:25:45.795 13:56:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:45.795 13:56:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:25:45.795 13:56:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:45.795 13:56:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:25:45.795 13:56:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:25:45.795 13:56:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:25:45.795 13:56:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1
00:25:45.795 13:56:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:25:45.795 13:56:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:25:45.795 13:56:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:25:45.795 13:56:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:25:45.795 13:56:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:25:45.795 13:56:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:25:45.795 13:56:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:45.795 13:56:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:25:45.795 13:56:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:45.795 13:56:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:25:45.795 13:56:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:25:45.795 13:56:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:25:46.362
00:25:46.362 13:56:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:25:46.362 13:56:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:25:46.362 13:56:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:25:46.620 13:56:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:46.620 13:56:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:25:46.620 13:56:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:46.620 13:56:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:25:46.620 13:56:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:46.620 13:56:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:25:46.620 {
00:25:46.620 "cntlid": 139,
00:25:46.620 "qid": 0,
00:25:46.620 "state": "enabled",
00:25:46.620 "thread": "nvmf_tgt_poll_group_000",
00:25:46.620 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562",
00:25:46.620 "listen_address": {
00:25:46.620 "trtype": "RDMA",
00:25:46.620 "adrfam": "IPv4",
00:25:46.620 "traddr": "192.168.100.8",
00:25:46.620 "trsvcid": "4420"
00:25:46.620 },
00:25:46.620 "peer_address": {
00:25:46.620 "trtype": "RDMA",
00:25:46.620 "adrfam": "IPv4",
00:25:46.620 "traddr": "192.168.100.8",
00:25:46.620 "trsvcid": "60783"
00:25:46.620 },
00:25:46.620 "auth": {
00:25:46.620 "state": "completed",
00:25:46.620 "digest": "sha512",
00:25:46.620 "dhgroup": "ffdhe8192"
00:25:46.620 }
00:25:46.620 }
00:25:46.620 ]'
00:25:46.620 13:56:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:25:46.620 13:56:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:25:46.620 13:56:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:25:46.620 13:56:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:25:46.620 13:56:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:25:46.620 13:56:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:25:46.620 13:56:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:25:46.620 13:56:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:25:46.877 13:56:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTdjNTRmZDg5N2Q5M2IwNjk0NDA5NTkzNjg0YWZmNDU20hyw: --dhchap-ctrl-secret DHHC-1:02:YWMyZDczYzA3MWM5ZDI5ZjZlZjVjYjYxZTgxMTkxYTMxZWJlNTgxODExYTE1M2YwB17WsA==:
00:25:46.877 13:56:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OTdjNTRmZDg5N2Q5M2IwNjk0NDA5NTkzNjg0YWZmNDU20hyw: --dhchap-ctrl-secret DHHC-1:02:YWMyZDczYzA3MWM5ZDI5ZjZlZjVjYjYxZTgxMTkxYTMxZWJlNTgxODExYTE1M2YwB17WsA==:
00:25:47.443 13:56:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:25:47.701 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:25:47.701 13:56:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562
00:25:47.701 13:56:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:47.701 13:56:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:25:47.701 13:56:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:47.701 13:56:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:25:47.701 13:56:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:25:47.701 13:56:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:25:47.701 13:56:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2
00:25:47.701 13:56:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:25:47.701 13:56:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:25:47.701 13:56:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:25:47.701 13:56:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:25:47.701 13:56:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:25:47.701 13:56:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:25:47.701 13:56:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:47.701 13:56:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:25:47.701 13:56:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:47.701 13:56:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:25:47.701 13:56:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:25:47.701 13:56:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:25:48.270
00:25:48.270 13:56:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:25:48.270 13:56:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:25:48.270 13:56:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:25:48.529 13:56:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:48.529 13:56:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:25:48.529 13:56:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:48.529 13:56:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:25:48.529 13:56:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:48.529 13:56:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:25:48.529 {
00:25:48.529 "cntlid": 141,
00:25:48.529 "qid": 0,
00:25:48.529 "state": "enabled",
00:25:48.529 "thread": "nvmf_tgt_poll_group_000",
00:25:48.529 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562",
00:25:48.529 "listen_address": {
00:25:48.529 "trtype": "RDMA",
00:25:48.529 "adrfam": "IPv4",
00:25:48.529 "traddr": "192.168.100.8",
00:25:48.529 "trsvcid": "4420"
00:25:48.529 },
00:25:48.529 "peer_address": {
00:25:48.529 "trtype": "RDMA",
00:25:48.529 "adrfam": "IPv4",
00:25:48.529 "traddr": "192.168.100.8",
00:25:48.529 "trsvcid": "44380"
00:25:48.529 },
00:25:48.529 "auth": {
00:25:48.529 "state": "completed",
00:25:48.529 "digest": "sha512",
00:25:48.529 "dhgroup": "ffdhe8192"
00:25:48.529 }
00:25:48.529 }
00:25:48.529 ]'
00:25:48.529 13:56:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:25:48.529 13:56:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:25:48.529 13:56:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:25:48.529 13:56:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:25:48.529 13:56:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:25:48.529 13:56:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:25:48.529 13:56:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:25:48.529 13:56:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:25:48.788 13:56:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzlkODZiZTFjMDUxY2RhNGEwZmVmZWJmYzEwMDJlMzFiZGUxMDE0Njk4MDk5MDRjEe2NXQ==: --dhchap-ctrl-secret DHHC-1:01:OTgxZWY0M2NiMWNlZmUwNjg3MGY5ODQ3YmE4MGJlZGH+9nhS:
00:25:48.788 13:56:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YzlkODZiZTFjMDUxY2RhNGEwZmVmZWJmYzEwMDJlMzFiZGUxMDE0Njk4MDk5MDRjEe2NXQ==: --dhchap-ctrl-secret DHHC-1:01:OTgxZWY0M2NiMWNlZmUwNjg3MGY5ODQ3YmE4MGJlZGH+9nhS:
00:25:49.355 13:56:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:25:49.355 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:25:49.355 13:56:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562
00:25:49.355 13:56:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:49.355 13:56:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:25:49.355 13:56:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:49.355 13:56:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:25:49.355 13:56:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:25:49.355 13:56:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:25:49.614 13:56:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3
00:25:49.615 13:56:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:25:49.615 13:56:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:25:49.615 13:56:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:25:49.615 13:56:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:25:49.615 13:56:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:25:49.615 13:56:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key3
00:25:49.615 13:56:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:49.615 13:56:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:25:49.615 13:56:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:49.615 13:56:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:25:49.615 13:56:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:25:49.615 13:56:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:25:50.183
00:25:50.183 13:56:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:25:50.183 13:56:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:25:50.183 13:56:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:25:50.183 13:56:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:50.183 13:56:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:25:50.183 13:56:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:50.183 13:56:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:25:50.183 13:56:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:50.183 13:56:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:25:50.183 {
00:25:50.183 "cntlid": 143,
00:25:50.183 "qid": 0,
00:25:50.183 "state": "enabled",
00:25:50.183 "thread": "nvmf_tgt_poll_group_000",
00:25:50.183 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562",
00:25:50.183 "listen_address": {
00:25:50.183 "trtype": "RDMA",
00:25:50.183 "adrfam": "IPv4",
00:25:50.183 "traddr": "192.168.100.8",
00:25:50.183 "trsvcid": "4420"
00:25:50.183 },
00:25:50.183 "peer_address": {
00:25:50.183 "trtype": "RDMA",
00:25:50.183 "adrfam": "IPv4",
00:25:50.183 "traddr": "192.168.100.8",
00:25:50.183 "trsvcid": "55150"
00:25:50.183 },
00:25:50.183 "auth": {
00:25:50.183 "state": "completed",
00:25:50.183 "digest": "sha512",
00:25:50.183 "dhgroup": "ffdhe8192"
00:25:50.183 }
00:25:50.183 }
00:25:50.183 ]'
00:25:50.441 13:56:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:25:50.441 13:56:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:25:50.441 13:56:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:25:50.441 13:56:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:25:50.441 13:56:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:25:50.441 13:56:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:25:50.441 13:56:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:25:50.441 13:56:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:25:50.700 13:56:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MDU2MGQ0Zjk3MGRlOGZmZjNiNTFmZTAxZTk3M2E2OWMxMGQwODU5NDIwODhlMjNjYTA1ZWJiMjMxYThlY2U2YWHslm0=:
00:25:50.700 13:56:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MDU2MGQ0Zjk3MGRlOGZmZjNiNTFmZTAxZTk3M2E2OWMxMGQwODU5NDIwODhlMjNjYTA1ZWJiMjMxYThlY2U2YWHslm0=:
00:25:51.267 13:56:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:25:51.267 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:25:51.267 13:56:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562
00:25:51.267 13:56:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:51.267 13:56:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:25:51.267 13:56:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:51.267 13:56:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=,
00:25:51.267 13:56:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512
00:25:51.267 13:56:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=,
00:25:51.267 13:56:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:25:51.267 13:56:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:25:51.268 13:56:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:25:51.526 13:56:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0
00:25:51.526 13:56:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:25:51.526 13:56:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:25:51.526 13:56:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:25:51.526 13:56:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:25:51.526 13:56:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:25:51.526 13:56:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:25:51.526 13:56:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:51.526 13:56:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:25:51.526 13:56:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:51.526 13:56:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:25:51.526 13:56:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:25:51.526 13:56:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:25:52.094
00:25:52.094 13:56:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:25:52.094 13:56:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:25:52.094 13:56:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:25:52.094 13:56:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:52.094 13:56:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:25:52.094 13:56:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:52.094 13:56:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:25:52.094 13:56:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:52.094 13:56:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:25:52.094 {
00:25:52.094 "cntlid": 145,
00:25:52.094 "qid": 0,
00:25:52.094 "state": "enabled",
00:25:52.094 "thread": "nvmf_tgt_poll_group_000",
00:25:52.094 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562",
00:25:52.094 "listen_address": {
00:25:52.094 "trtype": "RDMA",
00:25:52.094 "adrfam": "IPv4",
00:25:52.094 "traddr": "192.168.100.8",
00:25:52.094 "trsvcid": "4420"
00:25:52.094 },
00:25:52.094 "peer_address": {
00:25:52.094 "trtype": "RDMA",
00:25:52.094 "adrfam": "IPv4",
00:25:52.094 "traddr": "192.168.100.8",
00:25:52.094 "trsvcid": "57820"
00:25:52.094 },
00:25:52.094 "auth": {
00:25:52.094 "state": "completed",
00:25:52.094 "digest": "sha512",
00:25:52.094 "dhgroup": "ffdhe8192"
00:25:52.094 }
00:25:52.094 }
00:25:52.094 ]'
00:25:52.094 13:56:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:25:52.094 13:56:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:25:52.094 13:56:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:25:52.356 13:56:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:25:52.356 13:56:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:25:52.356 13:56:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:25:52.356 13:56:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:25:52.356 13:56:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:25:52.356 13:56:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODAzZDVlMDFlMzFkNmU3OWY5NDhkNjQ1Zjk4YmRiZTE0NTdiMzllOGYwYzkxOGQwj682eA==: --dhchap-ctrl-secret DHHC-1:03:M2RhOGMzZGFlYjJmMDhlNWZlZTc2Njc1YWJiM2NmMWE3ZmQ0NDBjNTkxMGIxYmJiYTVkYTM0ZmNjNWUxYTI4NCRCfwA=:
00:25:52.356 13:56:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ODAzZDVlMDFlMzFkNmU3OWY5NDhkNjQ1Zjk4YmRiZTE0NTdiMzllOGYwYzkxOGQwj682eA==: --dhchap-ctrl-secret DHHC-1:03:M2RhOGMzZGFlYjJmMDhlNWZlZTc2Njc1YWJiM2NmMWE3ZmQ0NDBjNTkxMGIxYmJiYTVkYTM0ZmNjNWUxYTI4NCRCfwA=:
00:25:53.050 13:56:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:25:53.309 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:25:53.309 13:56:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562
00:25:53.309 13:56:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:53.309 13:56:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:25:53.309 13:56:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:53.309 13:56:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key1
00:25:53.309 13:56:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:53.309 13:56:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:25:53.309 13:56:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:53.309 13:56:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2
00:25:53.309 13:56:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0
00:25:53.309 13:56:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2
00:25:53.309 13:56:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect
00:25:53.309 13:56:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:25:53.309 13:56:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect
00:25:53.309 13:56:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:25:53.309 13:56:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2
00:25:53.309 13:56:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2
00:25:53.309 13:56:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2
00:25:53.568 request:
00:25:53.568 {
00:25:53.568 "name": "nvme0",
00:25:53.568 "trtype": "rdma",
00:25:53.568 "traddr": "192.168.100.8",
00:25:53.568 "adrfam": "ipv4",
00:25:53.568 "trsvcid": "4420",
00:25:53.568 "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:25:53.568 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562",
00:25:53.568 "prchk_reftag": false,
00:25:53.568 "prchk_guard": false,
00:25:53.568 "hdgst": false,
00:25:53.568 "ddgst": false,
00:25:53.568 "dhchap_key": "key2",
00:25:53.568 "allow_unrecognized_csi": false,
00:25:53.568 "method": "bdev_nvme_attach_controller",
00:25:53.568 "req_id": 1
00:25:53.568 }
00:25:53.568 Got JSON-RPC error response
00:25:53.568 response:
00:25:53.568 {
00:25:53.568 "code": -5,
00:25:53.568 "message": "Input/output error"
00:25:53.568 }
00:25:53.568 13:56:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1
00:25:53.568 13:56:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:25:53.568 13:56:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:25:53.568 13:56:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:25:53.568 13:56:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562
00:25:53.568 13:56:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:53.568 13:56:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:25:53.568 13:56:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:53.568 13:56:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:25:53.568 13:56:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:53.568 13:56:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:25:53.568 13:56:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:53.568 13:56:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:25:53.568 13:56:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0
00:25:53.568 13:56:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:25:53.568 13:56:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect
00:25:53.568 13:56:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:25:53.568 13:56:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect
00:25:53.568 13:56:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:25:53.568 13:56:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:25:53.568 13:56:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:25:53.568 13:56:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:25:54.141 request:
00:25:54.141 {
00:25:54.141 "name": "nvme0",
00:25:54.141 "trtype": "rdma",
00:25:54.141 "traddr": "192.168.100.8",
00:25:54.141 "adrfam": "ipv4",
00:25:54.141 "trsvcid": "4420",
00:25:54.141 "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:25:54.141 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562",
00:25:54.141 "prchk_reftag": false,
00:25:54.141 "prchk_guard": false,
00:25:54.141 "hdgst": false,
00:25:54.141 "ddgst": false,
00:25:54.141 "dhchap_key": "key1",
00:25:54.141 "dhchap_ctrlr_key": "ckey2",
00:25:54.141 "allow_unrecognized_csi": false,
00:25:54.141 "method": "bdev_nvme_attach_controller",
00:25:54.141 "req_id": 1
00:25:54.141 }
00:25:54.141 Got JSON-RPC error response
00:25:54.141 response:
00:25:54.141 {
00:25:54.141 "code": -5,
00:25:54.141 "message": "Input/output error"
00:25:54.141 }
00:25:54.141 13:56:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1
00:25:54.141 13:56:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:25:54.141 13:56:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:25:54.141 13:56:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:25:54.141 13:56:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562
00:25:54.142 13:56:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:54.142 13:56:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:25:54.142 13:56:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:54.142 13:56:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key1
00:25:54.142 13:56:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:54.142 13:56:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:25:54.142 13:56:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:54.142 13:56:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:25:54.142 13:56:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0
00:25:54.142 13:56:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:25:54.142 13:56:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect
00:25:54.142 13:56:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:25:54.142 13:56:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect
00:25:54.142 13:56:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:25:54.142 13:56:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:25:54.142 13:56:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:25:54.142 13:56:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:25:54.708 request:
00:25:54.708 {
00:25:54.708 "name": "nvme0",
00:25:54.708 "trtype": "rdma", 00:25:54.708 "traddr": "192.168.100.8", 00:25:54.708 "adrfam": "ipv4", 00:25:54.708 "trsvcid": "4420", 00:25:54.708 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:25:54.708 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:25:54.708 "prchk_reftag": false, 00:25:54.708 "prchk_guard": false, 00:25:54.708 "hdgst": false, 00:25:54.708 "ddgst": false, 00:25:54.708 "dhchap_key": "key1", 00:25:54.708 "dhchap_ctrlr_key": "ckey1", 00:25:54.708 "allow_unrecognized_csi": false, 00:25:54.708 "method": "bdev_nvme_attach_controller", 00:25:54.708 "req_id": 1 00:25:54.708 } 00:25:54.708 Got JSON-RPC error response 00:25:54.708 response: 00:25:54.708 { 00:25:54.708 "code": -5, 00:25:54.708 "message": "Input/output error" 00:25:54.708 } 00:25:54.708 13:56:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:25:54.708 13:56:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:54.708 13:56:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:54.708 13:56:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:54.708 13:56:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:25:54.708 13:56:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.708 13:56:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:54.708 13:56:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.708 13:56:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 1762494 00:25:54.708 13:56:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 1762494 ']' 00:25:54.708 13:56:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 1762494 00:25:54.708 13:56:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:25:54.708 13:56:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:54.708 13:56:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1762494 00:25:54.708 13:56:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:54.708 13:56:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:54.708 13:56:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1762494' 00:25:54.708 killing process with pid 1762494 00:25:54.708 13:56:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 1762494 00:25:54.708 13:56:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 1762494 00:25:54.708 13:56:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:25:54.708 13:56:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:54.708 13:56:54 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:54.708 13:56:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:54.968 13:56:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=1786732 00:25:54.968 13:56:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:25:54.968 13:56:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 1786732 00:25:54.968 13:56:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1786732 ']' 00:25:54.968 13:56:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:54.968 13:56:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:54.968 13:56:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:54.968 13:56:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:54.968 13:56:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:54.968 13:56:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:54.968 13:56:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:25:54.968 13:56:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:54.968 13:56:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:54.968 13:56:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:54.968 13:56:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:54.968 13:56:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:25:54.968 13:56:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 1786732 00:25:54.968 13:56:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1786732 ']' 00:25:54.968 13:56:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:54.968 13:56:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:54.968 13:56:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:54.968 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
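The restart above (target/auth.sh@160, nvmfappstart --wait-for-rpc -L nvmf_auth) brings the target back up with framework initialization deferred, so the DH-HMAC-CHAP keys can be loaded into the keyring before any subsystem accepts connections; -L nvmf_auth enables the auth debug log component. A minimal standalone sketch of that pattern, assuming an SPDK checkout and the default /var/tmp/spdk.sock socket (the real nvmfappstart/waitforlisten helpers do more bookkeeping than this):

./build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
nvmfpid=$!
# Poll until the app answers on the default RPC socket.
until ./scripts/rpc.py rpc_get_methods &> /dev/null; do
    sleep 0.5
done
# Early configuration (keyring entries, options) goes here, then let the
# rest of the target initialize:
./scripts/rpc.py framework_start_init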
00:25:54.968 13:56:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable
00:25:54.968 13:56:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:25:55.227 13:56:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:25:55.227 13:56:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0
00:25:55.227 13:56:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd
00:25:55.227 13:56:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:55.227 13:56:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:25:55.227 null0
00:25:55.486 13:56:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:55.486 13:56:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}"
00:25:55.486 13:56:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.dFq
00:25:55.486 13:56:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:55.486 13:56:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:25:55.486 13:56:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:55.486 13:56:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.SQp ]]
00:25:55.486 13:56:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.SQp
00:25:55.486 13:56:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:55.486 13:56:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:25:55.486 13:56:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:55.486 13:56:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}"
00:25:55.486 13:56:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.LYl
00:25:55.486 13:56:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:55.486 13:56:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:25:55.486 13:56:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:55.486 13:56:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.GcJ ]]
00:25:55.486 13:56:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.GcJ
00:25:55.486 13:56:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:55.486 13:56:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:25:55.486 13:56:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:55.486 13:56:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}"
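The keyring_file_add_key loop running here (it continues with key2/ckey2 and key3 just below) registers each generated secret file under a short keyring name; every later RPC refers to those names (key0..key3, ckey0..ckey2), never to the /tmp paths. A sketch of one iteration, with an illustrative key generated via nvme-cli's gen-dhchap-key (that invocation and its flags are an assumption on my part; this run's key files were produced earlier by the harness):

# Produce a DHHC-1 secret (illustrative; --hmac=1 selects SHA-256).
nvme gen-dhchap-key --hmac=1 --nqn nqn.2024-03.io.spdk:cnode0 > /tmp/spdk.key-sha256.LYl
# Register it under the name "key1" on the target side.
./scripts/rpc.py keyring_file_add_key key1 /tmp/spdk.key-sha256.LYl
# Bind it to a host: the subsystem now requires this host to authenticate with key1.
./scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key1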
00:25:55.486 13:56:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.dgD 00:25:55.486 13:56:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.486 13:56:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:55.486 13:56:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.486 13:56:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.NtL ]] 00:25:55.486 13:56:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.NtL 00:25:55.486 13:56:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.486 13:56:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:55.486 13:56:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.486 13:56:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:25:55.486 13:56:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.WJO 00:25:55.486 13:56:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.486 13:56:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:55.486 13:56:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.486 13:56:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:25:55.486 13:56:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:25:55.487 13:56:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:25:55.487 13:56:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:25:55.487 13:56:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:25:55.487 13:56:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:25:55.487 13:56:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:25:55.487 13:56:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key3 00:25:55.487 13:56:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.487 13:56:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:55.487 13:56:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.487 13:56:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:25:55.487 13:56:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:25:55.487 13:56:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:25:56.054 nvme0n1 00:25:56.312 13:56:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:25:56.312 13:56:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:25:56.312 13:56:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:56.312 13:56:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:56.312 13:56:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:56.312 13:56:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.312 13:56:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:56.312 13:56:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.312 13:56:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:25:56.312 { 00:25:56.312 "cntlid": 1, 00:25:56.312 "qid": 0, 00:25:56.312 "state": "enabled", 00:25:56.312 "thread": "nvmf_tgt_poll_group_000", 00:25:56.312 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:25:56.312 "listen_address": { 00:25:56.312 "trtype": "RDMA", 00:25:56.312 "adrfam": "IPv4", 00:25:56.312 "traddr": "192.168.100.8", 00:25:56.312 "trsvcid": "4420" 00:25:56.312 }, 00:25:56.312 "peer_address": { 00:25:56.312 "trtype": "RDMA", 00:25:56.312 "adrfam": "IPv4", 00:25:56.312 "traddr": "192.168.100.8", 00:25:56.312 "trsvcid": "39350" 00:25:56.312 }, 00:25:56.312 "auth": { 00:25:56.312 "state": "completed", 00:25:56.312 "digest": "sha512", 00:25:56.312 "dhgroup": "ffdhe8192" 00:25:56.312 } 00:25:56.312 } 00:25:56.312 ]' 00:25:56.312 13:56:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:25:56.312 13:56:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:25:56.312 13:56:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:25:56.570 13:56:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:25:56.570 13:56:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:25:56.570 13:56:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:56.570 13:56:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:56.570 13:56:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:56.829 13:56:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MDU2MGQ0Zjk3MGRlOGZmZjNiNTFmZTAxZTk3M2E2OWMxMGQwODU5NDIwODhlMjNjYTA1ZWJiMjMxYThlY2U2YWHslm0=: 00:25:56.829 13:56:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MDU2MGQ0Zjk3MGRlOGZmZjNiNTFmZTAxZTk3M2E2OWMxMGQwODU5NDIwODhlMjNjYTA1ZWJiMjMxYThlY2U2YWHslm0=: 00:25:57.397 13:56:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:57.398 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:57.398 13:56:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:25:57.398 13:56:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.398 13:56:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:57.398 13:56:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.398 13:56:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key3 00:25:57.398 13:56:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.398 13:56:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:57.398 13:56:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.398 13:56:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:25:57.398 13:56:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:25:57.657 13:56:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:25:57.657 13:56:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:25:57.657 13:56:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:25:57.657 13:56:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:25:57.657 13:56:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:57.657 13:56:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:25:57.657 13:56:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:57.657 13:56:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:25:57.657 13:56:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 
4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:25:57.657 13:56:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:25:57.917 request: 00:25:57.917 { 00:25:57.917 "name": "nvme0", 00:25:57.917 "trtype": "rdma", 00:25:57.917 "traddr": "192.168.100.8", 00:25:57.917 "adrfam": "ipv4", 00:25:57.917 "trsvcid": "4420", 00:25:57.917 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:25:57.917 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:25:57.917 "prchk_reftag": false, 00:25:57.917 "prchk_guard": false, 00:25:57.917 "hdgst": false, 00:25:57.917 "ddgst": false, 00:25:57.917 "dhchap_key": "key3", 00:25:57.917 "allow_unrecognized_csi": false, 00:25:57.917 "method": "bdev_nvme_attach_controller", 00:25:57.917 "req_id": 1 00:25:57.917 } 00:25:57.917 Got JSON-RPC error response 00:25:57.917 response: 00:25:57.917 { 00:25:57.917 "code": -5, 00:25:57.917 "message": "Input/output error" 00:25:57.917 } 00:25:57.917 13:56:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:25:57.917 13:56:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:57.917 13:56:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:57.917 13:56:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:57.917 13:56:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:25:57.917 13:56:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:25:57.917 13:56:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:25:57.917 13:56:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:25:58.177 13:56:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:25:58.177 13:56:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:25:58.177 13:56:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:25:58.177 13:56:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:25:58.177 13:56:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:58.177 13:56:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:25:58.177 13:56:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:58.177 13:56:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 
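The attach being expanded here sits under NOT (target/auth.sh@193): the host was just reconfigured through bdev_nvme_set_options to offer only the ffdhe2048 DH group, a combination the target side will not negotiate for these keys, so the -5 Input/output error that follows is the expected result and the step passes precisely because the command fails. A reduced sketch of the NOT idiom from autotest_common.sh (the real helper also inspects es > 128 to tell signal deaths apart from ordinary failures):

NOT() {
    local es=0
    "$@" || es=$?
    # Succeed only when the wrapped command failed.
    (( es != 0 ))
}
NOT bdev_connect -b nvme0 --dhchap-key key3   # passes: the attach is rejected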
00:25:58.177 13:56:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:25:58.177 13:56:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:25:58.177 request: 00:25:58.177 { 00:25:58.177 "name": "nvme0", 00:25:58.177 "trtype": "rdma", 00:25:58.177 "traddr": "192.168.100.8", 00:25:58.177 "adrfam": "ipv4", 00:25:58.177 "trsvcid": "4420", 00:25:58.177 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:25:58.177 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:25:58.177 "prchk_reftag": false, 00:25:58.177 "prchk_guard": false, 00:25:58.177 "hdgst": false, 00:25:58.177 "ddgst": false, 00:25:58.177 "dhchap_key": "key3", 00:25:58.177 "allow_unrecognized_csi": false, 00:25:58.177 "method": "bdev_nvme_attach_controller", 00:25:58.177 "req_id": 1 00:25:58.177 } 00:25:58.177 Got JSON-RPC error response 00:25:58.177 response: 00:25:58.177 { 00:25:58.177 "code": -5, 00:25:58.177 "message": "Input/output error" 00:25:58.177 } 00:25:58.177 13:56:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:25:58.177 13:56:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:58.177 13:56:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:58.177 13:56:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:58.177 13:56:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:25:58.177 13:56:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:25:58.177 13:56:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:25:58.177 13:56:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:58.177 13:56:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:58.177 13:56:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:58.437 13:56:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:25:58.437 13:56:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.437 13:56:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:58.437 13:56:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:25:58.437 13:56:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:25:58.437 13:56:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.437 13:56:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:58.437 13:56:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.437 13:56:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:25:58.437 13:56:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:25:58.437 13:56:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:25:58.437 13:56:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:25:58.437 13:56:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:58.437 13:56:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:25:58.437 13:56:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:58.437 13:56:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:25:58.437 13:56:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:25:58.437 13:56:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:25:58.696 request: 00:25:58.696 { 00:25:58.696 "name": "nvme0", 00:25:58.696 "trtype": "rdma", 00:25:58.696 "traddr": "192.168.100.8", 00:25:58.696 "adrfam": "ipv4", 00:25:58.696 "trsvcid": "4420", 00:25:58.696 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:25:58.696 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:25:58.696 "prchk_reftag": false, 00:25:58.696 "prchk_guard": false, 00:25:58.696 "hdgst": false, 00:25:58.696 "ddgst": false, 00:25:58.696 "dhchap_key": "key0", 00:25:58.696 "dhchap_ctrlr_key": "key1", 00:25:58.696 "allow_unrecognized_csi": false, 00:25:58.696 "method": "bdev_nvme_attach_controller", 00:25:58.696 "req_id": 1 00:25:58.696 } 00:25:58.696 Got JSON-RPC error response 00:25:58.696 response: 00:25:58.696 { 00:25:58.696 "code": -5, 00:25:58.696 "message": "Input/output error" 00:25:58.696 } 00:25:58.955 13:56:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:25:58.955 13:56:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:58.955 
13:56:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:58.955 13:56:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:58.955 13:56:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:25:58.955 13:56:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:25:58.955 13:56:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:25:58.955 nvme0n1 00:25:59.215 13:56:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:25:59.215 13:56:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:25:59.215 13:56:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:59.215 13:56:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:59.215 13:56:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:59.215 13:56:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:59.475 13:56:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key1 00:25:59.475 13:56:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.475 13:56:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:59.475 13:56:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.475 13:56:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:25:59.475 13:56:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:25:59.475 13:56:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:26:00.044 nvme0n1 00:26:00.303 13:56:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:26:00.303 13:56:59 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:26:00.303 13:56:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:26:00.303 13:57:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:00.303 13:57:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:26:00.303 13:57:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.303 13:57:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:00.303 13:57:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.303 13:57:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:26:00.303 13:57:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:26:00.303 13:57:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:26:00.563 13:57:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:00.563 13:57:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:YzlkODZiZTFjMDUxY2RhNGEwZmVmZWJmYzEwMDJlMzFiZGUxMDE0Njk4MDk5MDRjEe2NXQ==: --dhchap-ctrl-secret DHHC-1:03:MDU2MGQ0Zjk3MGRlOGZmZjNiNTFmZTAxZTk3M2E2OWMxMGQwODU5NDIwODhlMjNjYTA1ZWJiMjMxYThlY2U2YWHslm0=: 00:26:00.563 13:57:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YzlkODZiZTFjMDUxY2RhNGEwZmVmZWJmYzEwMDJlMzFiZGUxMDE0Njk4MDk5MDRjEe2NXQ==: --dhchap-ctrl-secret DHHC-1:03:MDU2MGQ0Zjk3MGRlOGZmZjNiNTFmZTAxZTk3M2E2OWMxMGQwODU5NDIwODhlMjNjYTA1ZWJiMjMxYThlY2U2YWHslm0=: 00:26:01.131 13:57:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:26:01.131 13:57:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:26:01.131 13:57:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:26:01.131 13:57:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:26:01.131 13:57:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:26:01.131 13:57:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:26:01.131 13:57:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:26:01.131 13:57:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:26:01.131 13:57:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:26:01.390 13:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:26:01.390 13:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:26:01.390 13:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:26:01.390 13:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:26:01.390 13:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:01.390 13:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:26:01.390 13:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:01.390 13:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:26:01.390 13:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:26:01.390 13:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:26:01.958 request: 00:26:01.958 { 00:26:01.958 "name": "nvme0", 00:26:01.958 "trtype": "rdma", 00:26:01.958 "traddr": "192.168.100.8", 00:26:01.958 "adrfam": "ipv4", 00:26:01.958 "trsvcid": "4420", 00:26:01.958 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:26:01.958 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:26:01.958 "prchk_reftag": false, 00:26:01.958 "prchk_guard": false, 00:26:01.958 "hdgst": false, 00:26:01.958 "ddgst": false, 00:26:01.958 "dhchap_key": "key1", 00:26:01.958 "allow_unrecognized_csi": false, 00:26:01.958 "method": "bdev_nvme_attach_controller", 00:26:01.958 "req_id": 1 00:26:01.958 } 00:26:01.958 Got JSON-RPC error response 00:26:01.958 response: 00:26:01.958 { 00:26:01.958 "code": -5, 00:26:01.958 "message": "Input/output error" 00:26:01.958 } 00:26:01.958 13:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:26:01.958 13:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:01.958 13:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:01.958 13:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:01.958 13:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:26:01.959 13:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:26:01.959 13:57:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:26:02.526 nvme0n1 00:26:02.526 13:57:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:26:02.526 13:57:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:26:02.526 13:57:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:26:02.785 13:57:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:02.785 13:57:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:26:02.785 13:57:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:26:02.785 13:57:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:26:03.044 13:57:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.044 13:57:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:03.044 13:57:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.044 13:57:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:26:03.044 13:57:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:26:03.044 13:57:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:26:03.044 nvme0n1 00:26:03.044 13:57:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:26:03.044 13:57:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:26:03.044 13:57:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:26:03.303 13:57:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:03.303 13:57:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:26:03.303 13:57:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:26:03.563 13:57:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key3 00:26:03.563 13:57:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.563 13:57:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:03.563 13:57:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.563 13:57:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:OTdjNTRmZDg5N2Q5M2IwNjk0NDA5NTkzNjg0YWZmNDU20hyw: '' 2s 00:26:03.563 13:57:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:26:03.563 13:57:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:26:03.563 13:57:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:OTdjNTRmZDg5N2Q5M2IwNjk0NDA5NTkzNjg0YWZmNDU20hyw: 00:26:03.563 13:57:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:26:03.563 13:57:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:26:03.563 13:57:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:26:03.563 13:57:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:OTdjNTRmZDg5N2Q5M2IwNjk0NDA5NTkzNjg0YWZmNDU20hyw: ]] 00:26:03.563 13:57:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:OTdjNTRmZDg5N2Q5M2IwNjk0NDA5NTkzNjg0YWZmNDU20hyw: 00:26:03.563 13:57:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:26:03.563 13:57:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:26:03.563 13:57:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:26:05.464 13:57:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:26:05.464 13:57:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:26:05.464 13:57:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:26:05.464 13:57:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:26:05.464 13:57:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:26:05.464 13:57:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:26:05.720 13:57:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:26:05.720 13:57:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key2 00:26:05.720 13:57:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.720 13:57:05 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:05.720 13:57:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.720 13:57:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:YzlkODZiZTFjMDUxY2RhNGEwZmVmZWJmYzEwMDJlMzFiZGUxMDE0Njk4MDk5MDRjEe2NXQ==: 2s 00:26:05.720 13:57:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:26:05.720 13:57:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:26:05.720 13:57:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:26:05.720 13:57:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:YzlkODZiZTFjMDUxY2RhNGEwZmVmZWJmYzEwMDJlMzFiZGUxMDE0Njk4MDk5MDRjEe2NXQ==: 00:26:05.720 13:57:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:26:05.720 13:57:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:26:05.720 13:57:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:26:05.720 13:57:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:YzlkODZiZTFjMDUxY2RhNGEwZmVmZWJmYzEwMDJlMzFiZGUxMDE0Njk4MDk5MDRjEe2NXQ==: ]] 00:26:05.720 13:57:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:YzlkODZiZTFjMDUxY2RhNGEwZmVmZWJmYzEwMDJlMzFiZGUxMDE0Njk4MDk5MDRjEe2NXQ==: 00:26:05.721 13:57:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:26:05.721 13:57:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:26:07.651 13:57:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:26:07.651 13:57:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:26:07.651 13:57:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:26:07.651 13:57:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:26:07.651 13:57:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:26:07.651 13:57:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:26:07.651 13:57:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:26:07.651 13:57:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:26:07.910 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:26:07.910 13:57:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:26:07.910 13:57:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.910 13:57:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:07.910 13:57:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.910 13:57:07 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:26:07.910 13:57:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:26:07.910 13:57:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:26:08.477 nvme0n1 00:26:08.477 13:57:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:26:08.477 13:57:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:08.477 13:57:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:08.477 13:57:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:08.477 13:57:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:26:08.477 13:57:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:26:09.042 13:57:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:26:09.042 13:57:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:26:09.042 13:57:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:26:09.042 13:57:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:09.042 13:57:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:26:09.042 13:57:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.042 13:57:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:09.042 13:57:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.042 13:57:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:26:09.042 13:57:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:26:09.301 13:57:09 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:26:09.301 13:57:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:26:09.301 13:57:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:26:09.560 13:57:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:09.560 13:57:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:26:09.560 13:57:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.560 13:57:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:09.561 13:57:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.561 13:57:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:26:09.561 13:57:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:26:09.561 13:57:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:26:09.561 13:57:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:26:09.561 13:57:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:09.561 13:57:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:26:09.561 13:57:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:09.561 13:57:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:26:09.561 13:57:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:26:09.820 request: 00:26:09.820 { 00:26:09.820 "name": "nvme0", 00:26:09.820 "dhchap_key": "key1", 00:26:09.820 "dhchap_ctrlr_key": "key3", 00:26:09.820 "method": "bdev_nvme_set_keys", 00:26:09.820 "req_id": 1 00:26:09.820 } 00:26:09.820 Got JSON-RPC error response 00:26:09.820 response: 00:26:09.820 { 00:26:09.820 "code": -13, 00:26:09.820 "message": "Permission denied" 00:26:09.820 } 00:26:09.820 13:57:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:26:09.820 13:57:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:09.820 13:57:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:09.820 13:57:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:09.820 13:57:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc 
bdev_nvme_get_controllers 00:26:09.820 13:57:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:26:09.820 13:57:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:26:10.108 13:57:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:26:10.108 13:57:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:26:11.046 13:57:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:26:11.046 13:57:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:26:11.046 13:57:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:26:11.305 13:57:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:26:11.305 13:57:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:26:11.305 13:57:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.305 13:57:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:11.305 13:57:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.305 13:57:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:26:11.305 13:57:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:26:11.305 13:57:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:26:11.872 nvme0n1 00:26:11.872 13:57:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:26:11.872 13:57:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.872 13:57:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:11.872 13:57:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.872 13:57:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:26:11.872 
13:57:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:26:11.872 13:57:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:26:11.872 13:57:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:26:11.872 13:57:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:11.872 13:57:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:26:11.872 13:57:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:11.872 13:57:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:26:11.872 13:57:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:26:12.439 request: 00:26:12.439 { 00:26:12.439 "name": "nvme0", 00:26:12.439 "dhchap_key": "key2", 00:26:12.439 "dhchap_ctrlr_key": "key0", 00:26:12.439 "method": "bdev_nvme_set_keys", 00:26:12.439 "req_id": 1 00:26:12.439 } 00:26:12.439 Got JSON-RPC error response 00:26:12.439 response: 00:26:12.439 { 00:26:12.439 "code": -13, 00:26:12.439 "message": "Permission denied" 00:26:12.439 } 00:26:12.440 13:57:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:26:12.440 13:57:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:12.440 13:57:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:12.440 13:57:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:12.440 13:57:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:26:12.440 13:57:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:26:12.440 13:57:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:26:12.698 13:57:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:26:12.698 13:57:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:26:13.634 13:57:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:26:13.634 13:57:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:26:13.634 13:57:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:26:13.893 13:57:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:26:13.893 13:57:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:26:13.893 13:57:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:26:13.893 13:57:13 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 1762645 00:26:13.893 13:57:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 1762645 ']' 00:26:13.893 13:57:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 1762645 00:26:13.893 13:57:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:26:13.893 13:57:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:13.893 13:57:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1762645 00:26:13.893 13:57:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:13.893 13:57:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:13.893 13:57:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1762645' 00:26:13.893 killing process with pid 1762645 00:26:13.893 13:57:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 1762645 00:26:13.893 13:57:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 1762645 00:26:14.152 13:57:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:26:14.152 13:57:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:14.152 13:57:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:26:14.152 13:57:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:26:14.152 13:57:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:26:14.152 13:57:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:26:14.152 13:57:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:14.152 13:57:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:26:14.152 rmmod nvme_rdma 00:26:14.152 rmmod nvme_fabrics 00:26:14.152 13:57:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:14.152 13:57:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:26:14.152 13:57:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:26:14.152 13:57:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 1786732 ']' 00:26:14.152 13:57:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 1786732 00:26:14.152 13:57:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 1786732 ']' 00:26:14.152 13:57:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 1786732 00:26:14.152 13:57:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:26:14.152 13:57:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:14.152 13:57:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1786732 00:26:14.152 13:57:13 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:14.152 13:57:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:14.152 13:57:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1786732' 00:26:14.152 killing process with pid 1786732 00:26:14.152 13:57:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 1786732 00:26:14.152 13:57:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 1786732 00:26:14.411 13:57:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:14.411 13:57:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:26:14.411 13:57:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.dFq /tmp/spdk.key-sha256.LYl /tmp/spdk.key-sha384.dgD /tmp/spdk.key-sha512.WJO /tmp/spdk.key-sha512.SQp /tmp/spdk.key-sha384.GcJ /tmp/spdk.key-sha256.NtL '' /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf-auth.log 00:26:14.411 00:26:14.411 real 2m34.619s 00:26:14.411 user 5m56.766s 00:26:14.411 sys 0m20.641s 00:26:14.411 13:57:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:14.411 13:57:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:14.411 ************************************ 00:26:14.411 END TEST nvmf_auth_target 00:26:14.411 ************************************ 00:26:14.411 13:57:14 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' rdma = tcp ']' 00:26:14.411 13:57:14 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 1 -eq 1 ']' 00:26:14.411 13:57:14 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@48 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=rdma 00:26:14.411 13:57:14 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:14.411 13:57:14 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:14.411 13:57:14 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:14.411 ************************************ 00:26:14.411 START TEST nvmf_fuzz 00:26:14.411 ************************************ 00:26:14.411 13:57:14 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=rdma 00:26:14.671 * Looking for test storage... 
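The nvmf_auth_target run that just finished rotates DH-CHAP keys in two steps: the target subsystem is re-keyed first with nvmf_subsystem_set_keys, then the host controller follows with bdev_nvme_set_keys, and moving the host to a key pair the target does not hold is expected to fail with JSON-RPC error -13 (Permission denied), which is what the NOT wrapper asserts. A minimal sketch of the same flow, assuming the host RPC socket at /var/tmp/host.sock and the NQNs/addresses from this run:

  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  subnqn=nqn.2024-03.io.spdk:cnode0
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562
  # Attach with the current pair (key0/key1) and short reconnect timeouts.
  $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 \
    -a 192.168.100.8 -s 4420 -q "$hostnqn" -n "$subnqn" -b nvme0 \
    --dhchap-key key0 --dhchap-ctrlr-key key1 \
    --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1
  # Rotate to key2/key3: target side first (default RPC socket), then host side.
  $rpc nvmf_subsystem_set_keys "$subnqn" "$hostnqn" --dhchap-key key2 --dhchap-ctrlr-key key3
  $rpc -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3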
00:26:14.671 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:26:14.671 13:57:14 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:14.671 13:57:14 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1711 -- # lcov --version 00:26:14.671 13:57:14 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:14.671 13:57:14 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:14.671 13:57:14 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:14.671 13:57:14 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:14.671 13:57:14 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:14.671 13:57:14 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:26:14.671 13:57:14 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:26:14.671 13:57:14 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:26:14.671 13:57:14 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:26:14.671 13:57:14 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:26:14.671 13:57:14 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:26:14.671 13:57:14 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:26:14.671 13:57:14 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:14.671 13:57:14 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:26:14.671 13:57:14 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@345 -- # : 1 00:26:14.671 13:57:14 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:14.671 13:57:14 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:14.671 13:57:14 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # decimal 1 00:26:14.671 13:57:14 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=1 00:26:14.671 13:57:14 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:14.671 13:57:14 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 1 00:26:14.671 13:57:14 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:26:14.671 13:57:14 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # decimal 2 00:26:14.671 13:57:14 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=2 00:26:14.671 13:57:14 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:14.671 13:57:14 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 2 00:26:14.671 13:57:14 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:26:14.671 13:57:14 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:14.671 13:57:14 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:14.671 13:57:14 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # return 0 00:26:14.671 13:57:14 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:14.671 13:57:14 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:14.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:14.671 --rc genhtml_branch_coverage=1 00:26:14.671 --rc genhtml_function_coverage=1 00:26:14.671 --rc genhtml_legend=1 00:26:14.671 --rc geninfo_all_blocks=1 00:26:14.671 --rc geninfo_unexecuted_blocks=1 00:26:14.671 00:26:14.671 ' 00:26:14.671 13:57:14 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:14.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:14.671 --rc genhtml_branch_coverage=1 00:26:14.671 --rc genhtml_function_coverage=1 00:26:14.671 --rc genhtml_legend=1 00:26:14.671 --rc geninfo_all_blocks=1 00:26:14.671 --rc geninfo_unexecuted_blocks=1 00:26:14.671 00:26:14.671 ' 00:26:14.671 13:57:14 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:14.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:14.671 --rc genhtml_branch_coverage=1 00:26:14.671 --rc genhtml_function_coverage=1 00:26:14.671 --rc genhtml_legend=1 00:26:14.671 --rc geninfo_all_blocks=1 00:26:14.671 --rc geninfo_unexecuted_blocks=1 00:26:14.671 00:26:14.671 ' 00:26:14.671 13:57:14 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:14.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:14.671 --rc genhtml_branch_coverage=1 00:26:14.671 --rc genhtml_function_coverage=1 00:26:14.671 --rc genhtml_legend=1 00:26:14.671 --rc geninfo_all_blocks=1 00:26:14.671 --rc geninfo_unexecuted_blocks=1 00:26:14.671 00:26:14.671 ' 00:26:14.671 13:57:14 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:26:14.671 13:57:14 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:26:14.671 13:57:14 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:14.671 13:57:14 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:14.671 13:57:14 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:14.671 13:57:14 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:14.671 13:57:14 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:14.671 13:57:14 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:14.671 13:57:14 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:14.671 13:57:14 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:14.671 13:57:14 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:14.671 13:57:14 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:14.671 13:57:14 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:26:14.671 13:57:14 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:26:14.671 13:57:14 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:14.671 13:57:14 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:14.671 13:57:14 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:14.671 13:57:14 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:14.671 13:57:14 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:26:14.671 13:57:14 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:26:14.671 13:57:14 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:14.671 13:57:14 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:14.671 13:57:14 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:14.672 13:57:14 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:14.672 13:57:14 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:14.672 13:57:14 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:14.672 13:57:14 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:26:14.672 13:57:14 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:14.672 13:57:14 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@51 -- # : 0 00:26:14.672 13:57:14 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:14.672 13:57:14 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:14.672 13:57:14 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:14.672 13:57:14 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:14.672 13:57:14 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:14.672 13:57:14 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:14.672 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:14.672 13:57:14 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:14.672 13:57:14 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:14.672 13:57:14 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:14.672 13:57:14 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:26:14.672 13:57:14 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:26:14.672 13:57:14 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@474 -- # trap 
nvmftestfini SIGINT SIGTERM EXIT 00:26:14.672 13:57:14 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:14.672 13:57:14 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:14.672 13:57:14 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:14.672 13:57:14 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:14.672 13:57:14 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:14.672 13:57:14 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:14.672 13:57:14 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:14.672 13:57:14 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:14.672 13:57:14 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@309 -- # xtrace_disable 00:26:14.672 13:57:14 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:26:21.268 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:21.268 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # pci_devs=() 00:26:21.268 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:21.268 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:21.268 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:21.268 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:21.268 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:21.268 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@319 -- # net_devs=() 00:26:21.268 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:21.268 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # e810=() 00:26:21.268 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # local -ga e810 00:26:21.268 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # x722=() 00:26:21.268 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # local -ga x722 00:26:21.268 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@322 -- # mlx=() 00:26:21.268 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@322 -- # local -ga mlx 00:26:21.268 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:21.268 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:21.268 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:21.268 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:21.268 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:21.268 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:21.268 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:21.268 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:21.268 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:21.268 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:21.268 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:21.268 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:21.268 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:21.268 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:26:21.268 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:26:21.268 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:26:21.269 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:26:21.269 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:26:21.269 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:21.269 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:21.269 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:26:21.269 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:26:21.269 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:26:21.269 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:26:21.269 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:26:21.269 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:26:21.269 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:26:21.269 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:26:21.269 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:21.269 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:26:21.269 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:26:21.269 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:26:21.269 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:26:21.269 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:26:21.269 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:26:21.269 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:26:21.269 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:26:21.269 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@392 -- # (( 0 > 0 )) 
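The scan above filters a cached PCI table down to Mellanox parts (vendor 0x15b3) and finds both ConnectX ports of this node, 0000:18:00.0 and 0000:18:00.1 (device 0x1015). A rough equivalent outside the harness, sketched with plain lspci and the same sysfs path the trace uses for pci_net_devs; the vendor-ID filter is the only assumption:

  # Print the PCI address of every Mellanox (0x15b3) function, then the
  # net device(s) each one exposes under /sys/bus/pci/devices/<pci>/net.
  for pci in $(lspci -Dnn | awk '/\[15b3:/ {print $1}'); do
    echo "$pci: $(ls /sys/bus/pci/devices/$pci/net 2>/dev/null)"
  done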
00:26:21.269 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:26:21.269 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:21.269 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:21.269 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:26:21.269 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:21.269 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:21.269 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:26:21.269 Found net devices under 0000:18:00.0: mlx_0_0 00:26:21.269 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:21.269 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:21.269 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:21.269 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:26:21.269 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:21.269 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:21.269 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:26:21.269 Found net devices under 0000:18:00.1: mlx_0_1 00:26:21.269 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:21.269 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:21.269 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # is_hw=yes 00:26:21.269 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:21.269 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:26:21.269 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:26:21.269 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@448 -- # rdma_device_init 00:26:21.269 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:26:21.269 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@62 -- # uname 00:26:21.269 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:26:21.269 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@66 -- # modprobe ib_cm 00:26:21.269 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@67 -- # modprobe ib_core 00:26:21.269 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@68 -- # modprobe ib_umad 00:26:21.269 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:26:21.269 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@70 -- # modprobe iw_cm 00:26:21.269 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:26:21.269 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@72 -- # modprobe 
rdma_ucm 00:26:21.269 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@530 -- # allocate_nic_ips 00:26:21.269 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:26:21.269 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@77 -- # get_rdma_if_list 00:26:21.269 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:26:21.269 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:26:21.269 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:26:21.269 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:26:21.269 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:26:21.269 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:26:21.269 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:21.269 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:26:21.269 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@108 -- # echo mlx_0_0 00:26:21.269 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@109 -- # continue 2 00:26:21.269 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:26:21.269 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:21.269 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:26:21.269 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:21.269 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:26:21.269 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@108 -- # echo mlx_0_1 00:26:21.269 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@109 -- # continue 2 00:26:21.269 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:26:21.269 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:26:21.269 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:26:21.269 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:26:21.269 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # awk '{print $4}' 00:26:21.269 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # cut -d/ -f1 00:26:21.269 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:26:21.269 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:26:21.269 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:26:21.269 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:26:21.269 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:26:21.269 altname enp24s0f0np0 00:26:21.269 altname ens785f0np0 00:26:21.269 inet 192.168.100.8/24 scope global mlx_0_0 
00:26:21.269 valid_lft forever preferred_lft forever 00:26:21.269 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:26:21.269 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:26:21.269 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:26:21.269 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:26:21.269 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # awk '{print $4}' 00:26:21.269 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # cut -d/ -f1 00:26:21.269 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:26:21.269 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:26:21.269 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:26:21.269 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:26:21.269 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:26:21.269 altname enp24s0f1np1 00:26:21.269 altname ens785f1np1 00:26:21.269 inet 192.168.100.9/24 scope global mlx_0_1 00:26:21.269 valid_lft forever preferred_lft forever 00:26:21.269 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@450 -- # return 0 00:26:21.269 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:21.269 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:26:21.269 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:26:21.269 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:26:21.269 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@90 -- # get_rdma_if_list 00:26:21.269 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:26:21.269 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:26:21.269 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:26:21.269 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:26:21.269 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:26:21.269 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:26:21.269 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:21.269 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:26:21.269 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@108 -- # echo mlx_0_0 00:26:21.269 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@109 -- # continue 2 00:26:21.270 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:26:21.270 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:21.270 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:26:21.270 13:57:20 
nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:21.270 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:26:21.270 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@108 -- # echo mlx_0_1 00:26:21.270 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@109 -- # continue 2 00:26:21.270 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:26:21.270 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:26:21.270 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:26:21.270 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:26:21.270 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # awk '{print $4}' 00:26:21.270 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # cut -d/ -f1 00:26:21.270 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:26:21.270 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:26:21.270 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:26:21.270 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:26:21.270 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # awk '{print $4}' 00:26:21.270 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # cut -d/ -f1 00:26:21.270 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:26:21.270 192.168.100.9' 00:26:21.270 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:26:21.270 192.168.100.9' 00:26:21.270 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@485 -- # head -n 1 00:26:21.270 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:26:21.270 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:26:21.270 192.168.100.9' 00:26:21.270 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@486 -- # tail -n +2 00:26:21.270 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@486 -- # head -n 1 00:26:21.270 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:26:21.270 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:26:21.270 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:26:21.270 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:26:21.270 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:26:21.270 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:26:21.270 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=1794172 00:26:21.270 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 
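get_available_rdma_ips above yields one address per RDMA port, and the harness then splits that two-line list with head/tail to pick the first and second target IPs. The same split in isolation, using the addresses assigned in this run:

  RDMA_IP_LIST=$'192.168.100.8\n192.168.100.9'
  NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                # 192.168.100.8
  NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # 192.168.100.9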
00:26:21.270 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:26:21.270 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 1794172 00:26:21.270 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@835 -- # '[' -z 1794172 ']' 00:26:21.270 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:21.270 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:21.270 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:21.270 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:21.270 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:21.270 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:26:21.270 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:21.270 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@868 -- # return 0 00:26:21.270 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:26:21.270 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:21.270 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:26:21.270 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:21.270 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:26:21.270 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:21.270 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:26:21.270 Malloc0 00:26:21.270 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:21.270 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:21.270 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:21.270 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:26:21.270 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:21.270 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:21.270 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:21.270 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:26:21.270 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:21.270 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 
192.168.100.8 -s 4420 00:26:21.270 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:21.270 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:26:21.270 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:21.270 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:rdma adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:192.168.100.8 trsvcid:4420' 00:26:21.270 13:57:20 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:rdma adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:192.168.100.8 trsvcid:4420' -N -a 00:26:53.345 Fuzzing completed. Shutting down the fuzz application 00:26:53.345 00:26:53.345 Dumping successful admin opcodes: 00:26:53.345 9, 10, 00:26:53.345 Dumping successful io opcodes: 00:26:53.345 0, 9, 00:26:53.345 NS: 0x2000008f1f00 I/O qp, Total commands completed: 1433387, total successful commands: 8459, random_seed: 763096832 00:26:53.345 NS: 0x2000008f1f00 admin qp, Total commands completed: 219504, total successful commands: 50, random_seed: 1054204416 00:26:53.345 13:57:51 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:rdma adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:192.168.100.8 trsvcid:4420' -j /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:26:53.345 Fuzzing completed. Shutting down the fuzz application 00:26:53.345 00:26:53.345 Dumping successful admin opcodes: 00:26:53.345 00:26:53.345 Dumping successful io opcodes: 00:26:53.345 00:26:53.345 NS: 0x2000008f1f00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 3087550230 00:26:53.345 NS: 0x2000008f1f00 admin qp, Total commands completed: 16, total successful commands: 0, random_seed: 3087615272 00:26:53.345 13:57:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:53.345 13:57:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.345 13:57:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:26:53.345 13:57:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.345 13:57:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:26:53.345 13:57:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:26:53.345 13:57:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:53.345 13:57:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@121 -- # sync 00:26:53.345 13:57:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:26:53.345 13:57:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:26:53.345 13:57:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@124 -- # set +e 00:26:53.345 13:57:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:53.345 13:57:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 
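The fuzz stage above makes two passes against the rdma listener: a 30-second randomized run with a fixed seed, then a replay of the bundled example.json command set; the opcode and seed summaries between them are the fuzzer's own output. Condensed from the trace, with flags copied verbatim and only the two shell variables introduced here:

  fuzz=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz
  trid='trtype:rdma adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:192.168.100.8 trsvcid:4420'
  # Pass 1: 30 s of randomized fuzzing, seed 123456, on core mask 0x2.
  $fuzz -m 0x2 -t 30 -S 123456 -F "$trid" -N -a
  # Pass 2: replay the canned JSON command set shipped with the fuzzer.
  $fuzz -m 0x2 -F "$trid" -j /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a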
00:26:53.345 rmmod nvme_rdma 00:26:53.345 rmmod nvme_fabrics 00:26:53.345 13:57:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:53.345 13:57:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@128 -- # set -e 00:26:53.345 13:57:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@129 -- # return 0 00:26:53.345 13:57:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@517 -- # '[' -n 1794172 ']' 00:26:53.345 13:57:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@518 -- # killprocess 1794172 00:26:53.345 13:57:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@954 -- # '[' -z 1794172 ']' 00:26:53.345 13:57:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@958 -- # kill -0 1794172 00:26:53.345 13:57:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@959 -- # uname 00:26:53.345 13:57:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:53.345 13:57:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1794172 00:26:53.345 13:57:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:53.345 13:57:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:53.345 13:57:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1794172' 00:26:53.345 killing process with pid 1794172 00:26:53.345 13:57:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@973 -- # kill 1794172 00:26:53.345 13:57:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@978 -- # wait 1794172 00:26:53.345 13:57:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:53.346 13:57:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:26:53.346 13:57:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:26:53.346 00:26:53.346 real 0m38.506s 00:26:53.346 user 0m52.522s 00:26:53.346 sys 0m17.167s 00:26:53.346 13:57:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:53.346 13:57:52 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:26:53.346 ************************************ 00:26:53.346 END TEST nvmf_fuzz 00:26:53.346 ************************************ 00:26:53.346 13:57:52 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@49 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=rdma 00:26:53.346 13:57:52 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:53.346 13:57:52 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:53.346 13:57:52 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:53.346 ************************************ 00:26:53.346 START TEST nvmf_multiconnection 00:26:53.346 ************************************ 00:26:53.346 13:57:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=rdma 00:26:53.346 * Looking for test storage... 00:26:53.346 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:26:53.346 13:57:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:53.346 13:57:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1711 -- # lcov --version 00:26:53.346 13:57:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:53.346 13:57:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:53.346 13:57:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:53.346 13:57:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:53.346 13:57:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:53.346 13:57:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # IFS=.-: 00:26:53.346 13:57:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # read -ra ver1 00:26:53.346 13:57:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # IFS=.-: 00:26:53.346 13:57:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # read -ra ver2 00:26:53.346 13:57:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@338 -- # local 'op=<' 00:26:53.346 13:57:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@340 -- # ver1_l=2 00:26:53.346 13:57:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@341 -- # ver2_l=1 00:26:53.346 13:57:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:53.346 13:57:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@344 -- # case "$op" in 00:26:53.346 13:57:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@345 -- # : 1 00:26:53.346 13:57:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:53.346 13:57:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:53.346 13:57:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # decimal 1 00:26:53.346 13:57:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=1 00:26:53.346 13:57:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:53.346 13:57:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 1 00:26:53.346 13:57:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # ver1[v]=1 00:26:53.346 13:57:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # decimal 2 00:26:53.346 13:57:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=2 00:26:53.346 13:57:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:53.346 13:57:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 2 00:26:53.346 13:57:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # ver2[v]=2 00:26:53.346 13:57:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:53.346 13:57:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:53.346 13:57:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # return 0 00:26:53.346 13:57:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:53.346 13:57:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:53.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:53.346 --rc genhtml_branch_coverage=1 00:26:53.346 --rc genhtml_function_coverage=1 00:26:53.346 --rc genhtml_legend=1 00:26:53.346 --rc geninfo_all_blocks=1 00:26:53.346 --rc geninfo_unexecuted_blocks=1 00:26:53.346 00:26:53.346 ' 00:26:53.346 13:57:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:53.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:53.346 --rc genhtml_branch_coverage=1 00:26:53.346 --rc genhtml_function_coverage=1 00:26:53.346 --rc genhtml_legend=1 00:26:53.346 --rc geninfo_all_blocks=1 00:26:53.346 --rc geninfo_unexecuted_blocks=1 00:26:53.346 00:26:53.346 ' 00:26:53.346 13:57:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:53.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:53.346 --rc genhtml_branch_coverage=1 00:26:53.346 --rc genhtml_function_coverage=1 00:26:53.346 --rc genhtml_legend=1 00:26:53.346 --rc geninfo_all_blocks=1 00:26:53.346 --rc geninfo_unexecuted_blocks=1 00:26:53.346 00:26:53.346 ' 00:26:53.346 13:57:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:53.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:53.346 --rc genhtml_branch_coverage=1 00:26:53.346 --rc genhtml_function_coverage=1 00:26:53.346 --rc genhtml_legend=1 00:26:53.346 --rc geninfo_all_blocks=1 00:26:53.346 --rc geninfo_unexecuted_blocks=1 00:26:53.346 00:26:53.346 ' 00:26:53.346 13:57:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:26:53.346 13:57:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:26:53.346 13:57:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:53.346 13:57:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:53.346 13:57:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:53.346 13:57:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:53.346 13:57:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:53.346 13:57:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:53.346 13:57:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:53.346 13:57:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:53.346 13:57:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:53.346 13:57:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:53.347 13:57:53 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:26:53.347 13:57:53 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:26:53.347 13:57:53 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:53.347 13:57:53 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:53.347 13:57:53 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:53.347 13:57:53 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:53.347 13:57:53 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:26:53.347 13:57:53 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@15 -- # shopt -s extglob 00:26:53.347 13:57:53 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:53.347 13:57:53 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:53.347 13:57:53 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:53.347 13:57:53 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:53.347 13:57:53 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:53.347 13:57:53 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:53.347 13:57:53 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:26:53.347 13:57:53 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:53.347 13:57:53 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@51 -- # : 0 00:26:53.347 13:57:53 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:53.347 13:57:53 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:53.347 13:57:53 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:53.347 13:57:53 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:53.347 13:57:53 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:53.347 13:57:53 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:53.347 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:53.347 13:57:53 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:53.347 13:57:53 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:53.347 13:57:53 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:53.347 13:57:53 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:53.347 13:57:53 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:53.347 13:57:53 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:26:53.347 13:57:53 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:26:53.347 13:57:53 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:26:53.347 13:57:53 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:53.347 13:57:53 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:53.347 13:57:53 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:53.347 13:57:53 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:53.347 13:57:53 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:53.347 13:57:53 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:53.347 13:57:53 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:53.347 13:57:53 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:53.347 13:57:53 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:53.347 13:57:53 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@309 -- # xtrace_disable 00:26:53.347 13:57:53 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:59.921 13:57:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:59.921 13:57:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # pci_devs=() 00:26:59.921 13:57:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:59.921 13:57:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:59.921 13:57:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:59.921 13:57:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:59.921 13:57:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:59.921 13:57:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@319 -- # net_devs=() 00:26:59.921 13:57:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:59.921 
13:57:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # e810=() 00:26:59.921 13:57:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # local -ga e810 00:26:59.921 13:57:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # x722=() 00:26:59.921 13:57:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # local -ga x722 00:26:59.921 13:57:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@322 -- # mlx=() 00:26:59.921 13:57:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@322 -- # local -ga mlx 00:26:59.922 13:57:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:59.922 13:57:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:59.922 13:57:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:59.922 13:57:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:59.922 13:57:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:59.922 13:57:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:59.922 13:57:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:59.922 13:57:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:59.922 13:57:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:59.922 13:57:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:59.922 13:57:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:59.922 13:57:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:59.922 13:57:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:59.922 13:57:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:26:59.922 13:57:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:26:59.922 13:57:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:26:59.922 13:57:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:26:59.922 13:57:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:26:59.922 13:57:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:59.922 13:57:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:59.922 13:57:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:26:59.922 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:26:59.922 13:57:58 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:26:59.922 13:57:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:26:59.922 13:57:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:26:59.922 13:57:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:26:59.922 13:57:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:26:59.922 13:57:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:26:59.922 13:57:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:59.922 13:57:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:26:59.922 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:26:59.922 13:57:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:26:59.922 13:57:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:26:59.922 13:57:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:26:59.922 13:57:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:26:59.922 13:57:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:26:59.922 13:57:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:26:59.922 13:57:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:59.922 13:57:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:26:59.922 13:57:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:59.922 13:57:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:59.922 13:57:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:26:59.922 13:57:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:59.922 13:57:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:59.922 13:57:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:26:59.922 Found net devices under 0000:18:00.0: mlx_0_0 00:26:59.922 13:57:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:59.922 13:57:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:59.922 13:57:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:59.922 13:57:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:26:59.922 13:57:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:59.922 13:57:58 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:59.922 13:57:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:26:59.922 Found net devices under 0000:18:00.1: mlx_0_1 00:26:59.922 13:57:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:59.922 13:57:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:59.922 13:57:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # is_hw=yes 00:26:59.922 13:57:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:59.922 13:57:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:26:59.922 13:57:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:26:59.922 13:57:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@448 -- # rdma_device_init 00:26:59.922 13:57:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:26:59.922 13:57:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@62 -- # uname 00:26:59.922 13:57:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:26:59.922 13:57:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@66 -- # modprobe ib_cm 00:26:59.922 13:57:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@67 -- # modprobe ib_core 00:26:59.922 13:57:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@68 -- # modprobe ib_umad 00:26:59.922 13:57:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:26:59.922 13:57:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@70 -- # modprobe iw_cm 00:26:59.922 13:57:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:26:59.922 13:57:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:26:59.922 13:57:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@530 -- # allocate_nic_ips 00:26:59.922 13:57:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:26:59.922 13:57:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@77 -- # get_rdma_if_list 00:26:59.922 13:57:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:26:59.922 13:57:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:26:59.922 13:57:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:26:59.922 13:57:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:26:59.922 13:57:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:26:59.922 13:57:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:26:59.922 13:57:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@106 -- # for rxe_net_dev in 
"${rxe_net_devs[@]}" 00:26:59.922 13:57:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:26:59.922 13:57:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@108 -- # echo mlx_0_0 00:26:59.922 13:57:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@109 -- # continue 2 00:26:59.922 13:57:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:26:59.922 13:57:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:59.922 13:57:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:26:59.922 13:57:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:59.922 13:57:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:26:59.922 13:57:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@108 -- # echo mlx_0_1 00:26:59.922 13:57:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@109 -- # continue 2 00:26:59.922 13:57:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:26:59.922 13:57:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:26:59.922 13:57:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:26:59.922 13:57:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:26:59.922 13:57:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # awk '{print $4}' 00:26:59.922 13:57:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # cut -d/ -f1 00:26:59.922 13:57:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:26:59.922 13:57:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:26:59.922 13:57:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:26:59.922 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:26:59.922 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:26:59.922 altname enp24s0f0np0 00:26:59.922 altname ens785f0np0 00:26:59.922 inet 192.168.100.8/24 scope global mlx_0_0 00:26:59.922 valid_lft forever preferred_lft forever 00:26:59.922 13:57:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:26:59.922 13:57:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:26:59.923 13:57:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:26:59.923 13:57:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:26:59.923 13:57:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # awk '{print $4}' 00:26:59.923 13:57:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # cut -d/ -f1 00:26:59.923 13:57:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:26:59.923 13:57:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- 
nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:26:59.923 13:57:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:26:59.923 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:26:59.923 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:26:59.923 altname enp24s0f1np1 00:26:59.923 altname ens785f1np1 00:26:59.923 inet 192.168.100.9/24 scope global mlx_0_1 00:26:59.923 valid_lft forever preferred_lft forever 00:26:59.923 13:57:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@450 -- # return 0 00:26:59.923 13:57:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:59.923 13:57:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:26:59.923 13:57:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:26:59.923 13:57:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:26:59.923 13:57:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@90 -- # get_rdma_if_list 00:26:59.923 13:57:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:26:59.923 13:57:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:26:59.923 13:57:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:26:59.923 13:57:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:26:59.923 13:57:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:26:59.923 13:57:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:26:59.923 13:57:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:59.923 13:57:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:26:59.923 13:57:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@108 -- # echo mlx_0_0 00:26:59.923 13:57:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@109 -- # continue 2 00:26:59.923 13:57:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:26:59.923 13:57:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:59.923 13:57:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:26:59.923 13:57:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:59.923 13:57:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:26:59.923 13:57:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@108 -- # echo mlx_0_1 00:26:59.923 13:57:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@109 -- # continue 2 00:26:59.923 13:57:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:26:59.923 13:57:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection 
-- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:26:59.923 13:57:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:26:59.923 13:57:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:26:59.923 13:57:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # awk '{print $4}' 00:26:59.923 13:57:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # cut -d/ -f1 00:26:59.923 13:57:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:26:59.923 13:57:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:26:59.923 13:57:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:26:59.923 13:57:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:26:59.923 13:57:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # awk '{print $4}' 00:26:59.923 13:57:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # cut -d/ -f1 00:26:59.923 13:57:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:26:59.923 192.168.100.9' 00:26:59.923 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:26:59.923 192.168.100.9' 00:26:59.923 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@485 -- # head -n 1 00:26:59.923 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:26:59.923 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:26:59.923 192.168.100.9' 00:26:59.923 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@486 -- # tail -n +2 00:26:59.923 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@486 -- # head -n 1 00:26:59.923 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:26:59.923 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:26:59.923 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:26:59.923 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:26:59.923 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:26:59.923 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:26:59.923 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:26:59.923 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:59.923 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:59.923 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:59.923 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@509 -- # nvmfpid=1803181 00:26:59.923 13:57:59 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:59.923 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@510 -- # waitforlisten 1803181 00:26:59.923 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@835 -- # '[' -z 1803181 ']' 00:26:59.923 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:59.923 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:59.923 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:59.923 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:59.923 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:59.923 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:59.923 [2024-12-05 13:57:59.095641] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 00:26:59.923 [2024-12-05 13:57:59.095692] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:59.923 [2024-12-05 13:57:59.169497] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:59.923 [2024-12-05 13:57:59.193382] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:59.923 [2024-12-05 13:57:59.193421] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:59.923 [2024-12-05 13:57:59.193428] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:59.923 [2024-12-05 13:57:59.193433] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:59.923 [2024-12-05 13:57:59.193438] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
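[Editor's note] The target startup traced here reduces to launching nvmf_tgt and polling its RPC socket until it answers. A hedged sketch of that sequence, assuming an SPDK checkout; the polling loop is a simplification of the harness's waitforlisten helper, and rpc_get_methods is the stock rpc.py method used here only as a liveness probe:

    ./spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    NVMFPID=$!

    # Simplified stand-in for waitforlisten: retry until the RPC socket answers
    until ./spdk/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$NVMFPID" || exit 1    # bail out if the target died during startup
        sleep 0.5
    done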
00:26:59.923 [2024-12-05 13:57:59.194653] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:59.923 [2024-12-05 13:57:59.194766] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:59.923 [2024-12-05 13:57:59.194891] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:59.923 [2024-12-05 13:57:59.194893] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:59.923 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:59.923 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@868 -- # return 0 00:26:59.923 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:59.923 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:59.923 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:59.923 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:59.923 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:26:59.923 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.923 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:59.923 [2024-12-05 13:57:59.340441] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1a78f30/0x1a7d420) succeed. 00:26:59.923 [2024-12-05 13:57:59.349137] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1a7a5c0/0x1abeac0) succeed. 
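[Editor's note] With all four reactors running and both mlx5 IB devices registered, the script creates the RDMA transport. The rpc_cmd wrapper used in the trace is a thin shim over scripts/rpc.py, so the equivalent standalone call is (options copied verbatim from the trace):

    ./spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192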
00:26:59.923 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.923 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:26:59.923 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:59.923 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:26:59.923 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.924 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:59.924 Malloc1 00:26:59.924 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.924 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:26:59.924 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.924 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:59.924 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.924 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:26:59.924 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.924 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:59.924 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.924 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:26:59.924 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.924 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:59.924 [2024-12-05 13:57:59.519557] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:26:59.924 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.924 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:59.924 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:26:59.924 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.924 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:59.924 Malloc2 00:26:59.924 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.924 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 
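[Editor's note] From here the trace repeats one four-RPC cycle per subsystem, eleven times (NVMF_SUBSYS=11): create a 64 MiB/512 B malloc bdev, create the subsystem, attach the bdev as a namespace, and add an RDMA listener on 192.168.100.8:4420. Collapsed into the loop multiconnection.sh is effectively running (a sketch; the script issues these through its rpc_cmd wrapper):

    for i in $(seq 1 11); do
        ./spdk/scripts/rpc.py bdev_malloc_create 64 512 -b "Malloc$i"
        ./spdk/scripts/rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
        ./spdk/scripts/rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
        ./spdk/scripts/rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
            -t rdma -a 192.168.100.8 -s 4420
    done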
00:26:59.924 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.924 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:59.924 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.924 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:26:59.924 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.924 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:59.924 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.924 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:26:59.924 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.924 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:59.924 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.924 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:59.924 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:26:59.924 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.924 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:59.924 Malloc3 00:26:59.924 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.924 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:26:59.924 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.924 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:59.924 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.924 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:26:59.924 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.924 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:59.924 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.924 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:26:59.924 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.924 
13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:59.924 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.924 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:59.924 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:26:59.924 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.924 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:59.924 Malloc4 00:26:59.924 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.924 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:26:59.924 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.924 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:59.924 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.924 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:26:59.924 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.924 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:59.924 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.924 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 192.168.100.8 -s 4420 00:26:59.924 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.924 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:59.924 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.924 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:59.924 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:26:59.924 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.924 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:59.924 Malloc5 00:26:59.924 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.924 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:26:59.924 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.924 13:57:59 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:59.924 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.924 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:26:59.924 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.924 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:59.924 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.924 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t rdma -a 192.168.100.8 -s 4420 00:26:59.924 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.924 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:59.924 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.924 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:59.924 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:26:59.924 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.924 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:59.924 Malloc6 00:26:59.924 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.925 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:26:59.925 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.925 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:59.925 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.925 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:26:59.925 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.925 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:59.925 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.925 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t rdma -a 192.168.100.8 -s 4420 00:26:59.925 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.925 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:59.925 13:57:59 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.925 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:59.925 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:26:59.925 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.925 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:00.184 Malloc7 00:27:00.184 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.184 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:27:00.184 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.184 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:00.184 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.184 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:27:00.184 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.184 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:00.184 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.184 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t rdma -a 192.168.100.8 -s 4420 00:27:00.184 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.184 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:00.184 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.184 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:00.184 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:27:00.184 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.184 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:00.184 Malloc8 00:27:00.184 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.184 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:27:00.184 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.184 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:00.184 13:57:59 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.184 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:27:00.184 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.184 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:00.184 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.184 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t rdma -a 192.168.100.8 -s 4420 00:27:00.184 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.184 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:00.184 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.184 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:00.184 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:27:00.184 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.184 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:00.184 Malloc9 00:27:00.184 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.184 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:27:00.184 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.184 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:00.184 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.184 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:27:00.184 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.184 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:00.184 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.184 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t rdma -a 192.168.100.8 -s 4420 00:27:00.184 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.184 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:00.184 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.184 13:57:59 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:00.184 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:27:00.184 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.184 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:00.184 Malloc10 00:27:00.184 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.184 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:27:00.184 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.184 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:00.184 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.184 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:27:00.184 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.184 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:00.184 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.184 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t rdma -a 192.168.100.8 -s 4420 00:27:00.184 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.184 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:00.184 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.184 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:00.184 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:27:00.184 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.184 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:00.184 Malloc11 00:27:00.184 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.184 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:27:00.184 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.184 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:00.184 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.184 13:57:59 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:27:00.184 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.184 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:00.184 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.184 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t rdma -a 192.168.100.8 -s 4420 00:27:00.184 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.184 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:00.184 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.184 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:27:00.184 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:00.184 13:57:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:27:01.117 13:58:00 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:27:01.117 13:58:00 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:27:01.117 13:58:00 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:27:01.117 13:58:00 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:27:01.117 13:58:00 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:27:03.695 13:58:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:27:03.695 13:58:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK1 00:27:03.695 13:58:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:27:03.695 13:58:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:27:03.695 13:58:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:27:03.695 13:58:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:27:03.695 13:58:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:03.695 13:58:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode2 -a 192.168.100.8 -s 4420 00:27:04.265 13:58:03 
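[Editor's note] The trace above has just finished the target-side provisioning loop (multiconnection.sh lines 21-25): for each of the eleven indices it creates a 64 MB malloc bdev with 512-byte blocks, a subsystem with serial SPDK<i>, attaches the bdev as a namespace, and opens an RDMA listener on 192.168.100.8:4420. A minimal sketch of that loop, reconstructed from the traced rpc_cmd calls and not copied from multiconnection.sh itself (rpc_cmd is the autotest wrapper around scripts/rpc.py; NVMF_SUBSYS=11 is implied by the 'seq 1 11' that follows):

# Reconstructed from the xtrace above.
for i in $(seq 1 $NVMF_SUBSYS); do
  rpc_cmd bdev_malloc_create 64 512 -b Malloc$i                           # RAM-backed bdev, 64 MB / 512 B blocks
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i  # -a: allow any host, -s: serial number
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i      # expose the bdev as a namespace
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t rdma -a 192.168.100.8 -s 4420
done

The serials assigned here (SPDK1..SPDK11) are what the host-side waitforserial polls below key on.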
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:27:04.265 13:58:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:27:04.265 13:58:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:27:04.265 13:58:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:27:04.265 13:58:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:27:06.168 13:58:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:27:06.168 13:58:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:27:06.168 13:58:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK2 00:27:06.168 13:58:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:27:06.168 13:58:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:27:06.168 13:58:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:27:06.168 13:58:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:06.168 13:58:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode3 -a 192.168.100.8 -s 4420 00:27:07.546 13:58:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:27:07.546 13:58:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:27:07.546 13:58:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:27:07.546 13:58:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:27:07.546 13:58:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:27:09.543 13:58:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:27:09.543 13:58:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:27:09.543 13:58:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK3 00:27:09.543 13:58:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:27:09.543 13:58:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:27:09.543 13:58:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:27:09.543 13:58:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:09.543 13:58:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode4 -a 192.168.100.8 -s 4420 00:27:10.111 13:58:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:27:10.111 13:58:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:27:10.111 13:58:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:27:10.111 13:58:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:27:10.111 13:58:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:27:12.645 13:58:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:27:12.645 13:58:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:27:12.645 13:58:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK4 00:27:12.645 13:58:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:27:12.645 13:58:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:27:12.645 13:58:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:27:12.645 13:58:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:12.645 13:58:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode5 -a 192.168.100.8 -s 4420 00:27:13.212 13:58:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:27:13.213 13:58:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:27:13.213 13:58:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:27:13.213 13:58:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:27:13.213 13:58:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:27:15.117 13:58:14 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:27:15.117 13:58:14 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:27:15.117 13:58:14 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK5 00:27:15.376 13:58:14 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:27:15.376 13:58:14 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:27:15.376 13:58:14 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:27:15.376 13:58:14 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection 
-- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:15.376 13:58:14 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode6 -a 192.168.100.8 -s 4420 00:27:16.310 13:58:15 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:27:16.310 13:58:15 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:27:16.310 13:58:15 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:27:16.310 13:58:15 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:27:16.311 13:58:15 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:27:18.213 13:58:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:27:18.213 13:58:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:27:18.213 13:58:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK6 00:27:18.213 13:58:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:27:18.213 13:58:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:27:18.213 13:58:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:27:18.213 13:58:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:18.213 13:58:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode7 -a 192.168.100.8 -s 4420 00:27:19.150 13:58:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:27:19.150 13:58:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:27:19.150 13:58:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:27:19.150 13:58:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:27:19.150 13:58:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:27:21.681 13:58:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:27:21.681 13:58:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:27:21.681 13:58:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK7 00:27:21.681 13:58:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:27:21.681 13:58:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == 
nvme_device_counter )) 00:27:21.681 13:58:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:27:21.681 13:58:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:21.681 13:58:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode8 -a 192.168.100.8 -s 4420 00:27:22.245 13:58:21 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:27:22.245 13:58:21 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:27:22.245 13:58:21 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:27:22.245 13:58:21 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:27:22.245 13:58:21 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:27:24.147 13:58:23 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:27:24.147 13:58:23 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:27:24.147 13:58:23 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK8 00:27:24.147 13:58:23 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:27:24.147 13:58:23 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:27:24.147 13:58:23 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:27:24.147 13:58:23 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:24.147 13:58:23 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode9 -a 192.168.100.8 -s 4420 00:27:25.521 13:58:24 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:27:25.521 13:58:24 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:27:25.521 13:58:24 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:27:25.521 13:58:24 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:27:25.521 13:58:24 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:27:27.419 13:58:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:27:27.419 13:58:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:27:27.419 13:58:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK9 00:27:27.419 13:58:26 
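[Editor's note] Every connect in this stretch pairs one 'nvme connect' with one waitforserial poll: attach to cnode<i> over RDMA, then retry 'lsblk -l -o NAME,SERIAL | grep -c SPDK<i>' every two seconds, up to 15 tries, until exactly one device with that serial appears. A sketch of that host-side pattern, reconstructed from the trace (the hostnqn/hostid UUID is this host's own identity as logged; waitforserial is the common/autotest_common.sh helper whose @1202-@1212 lines appear throughout):

# Reconstructed from the xtrace above.
for i in $(seq 1 $NVMF_SUBSYS); do
  nvme connect -i 15 \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 \
    --hostid=00bafac1-9c9c-e711-906e-0017a4403562 \
    -t rdma -n nqn.2016-06.io.spdk:cnode$i -a 192.168.100.8 -s 4420
  waitforserial SPDK$i   # sleeps 2 s per attempt; returns 0 once the device count matches
done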
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:27:27.419 13:58:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:27:27.419 13:58:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:27:27.419 13:58:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:27.419 13:58:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode10 -a 192.168.100.8 -s 4420 00:27:28.354 13:58:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:27:28.354 13:58:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:27:28.354 13:58:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:27:28.354 13:58:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:27:28.354 13:58:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:27:30.259 13:58:29 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:27:30.259 13:58:29 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:27:30.259 13:58:29 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK10 00:27:30.259 13:58:29 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:27:30.259 13:58:29 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:27:30.259 13:58:29 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:27:30.259 13:58:29 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:30.259 13:58:29 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode11 -a 192.168.100.8 -s 4420 00:27:31.197 13:58:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:27:31.197 13:58:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:27:31.197 13:58:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:27:31.197 13:58:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:27:31.197 13:58:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:27:33.732 13:58:32 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:27:33.732 13:58:32 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:27:33.732 13:58:32 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK11 00:27:33.732 13:58:32 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:27:33.732 13:58:32 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:27:33.732 13:58:32 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:27:33.732 13:58:32 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:27:33.732 [global] 00:27:33.732 thread=1 00:27:33.732 invalidate=1 00:27:33.732 rw=read 00:27:33.732 time_based=1 00:27:33.732 runtime=10 00:27:33.732 ioengine=libaio 00:27:33.732 direct=1 00:27:33.732 bs=262144 00:27:33.732 iodepth=64 00:27:33.732 norandommap=1 00:27:33.732 numjobs=1 00:27:33.732 00:27:33.732 [job0] 00:27:33.732 filename=/dev/nvme0n1 00:27:33.732 [job1] 00:27:33.732 filename=/dev/nvme10n1 00:27:33.732 [job2] 00:27:33.732 filename=/dev/nvme1n1 00:27:33.732 [job3] 00:27:33.732 filename=/dev/nvme2n1 00:27:33.732 [job4] 00:27:33.732 filename=/dev/nvme3n1 00:27:33.732 [job5] 00:27:33.732 filename=/dev/nvme4n1 00:27:33.732 [job6] 00:27:33.732 filename=/dev/nvme5n1 00:27:33.732 [job7] 00:27:33.732 filename=/dev/nvme6n1 00:27:33.732 [job8] 00:27:33.732 filename=/dev/nvme7n1 00:27:33.732 [job9] 00:27:33.732 filename=/dev/nvme8n1 00:27:33.732 [job10] 00:27:33.732 filename=/dev/nvme9n1 00:27:33.732 Could not set queue depth (nvme0n1) 00:27:33.732 Could not set queue depth (nvme10n1) 00:27:33.732 Could not set queue depth (nvme1n1) 00:27:33.732 Could not set queue depth (nvme2n1) 00:27:33.732 Could not set queue depth (nvme3n1) 00:27:33.732 Could not set queue depth (nvme4n1) 00:27:33.732 Could not set queue depth (nvme5n1) 00:27:33.732 Could not set queue depth (nvme6n1) 00:27:33.732 Could not set queue depth (nvme7n1) 00:27:33.732 Could not set queue depth (nvme8n1) 00:27:33.732 Could not set queue depth (nvme9n1) 00:27:33.732 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:33.732 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:33.732 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:33.732 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:33.732 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:33.732 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:33.732 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:33.732 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:33.732 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:33.732 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:33.732 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 
256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:33.732 fio-3.35 00:27:33.732 Starting 11 threads 00:27:45.943 00:27:45.943 job0: (groupid=0, jobs=1): err= 0: pid=1809629: Thu Dec 5 13:58:43 2024 00:27:45.943 read: IOPS=1316, BW=329MiB/s (345MB/s)(3299MiB/10024msec) 00:27:45.943 slat (usec): min=8, max=55736, avg=718.16, stdev=2138.93 00:27:45.943 clat (msec): min=12, max=122, avg=47.85, stdev=16.59 00:27:45.943 lat (msec): min=12, max=146, avg=48.57, stdev=16.92 00:27:45.943 clat percentiles (msec): 00:27:45.943 | 1.00th=[ 21], 5.00th=[ 29], 10.00th=[ 30], 20.00th=[ 32], 00:27:45.943 | 30.00th=[ 36], 40.00th=[ 44], 50.00th=[ 46], 60.00th=[ 48], 00:27:45.943 | 70.00th=[ 54], 80.00th=[ 61], 90.00th=[ 73], 95.00th=[ 84], 00:27:45.943 | 99.00th=[ 91], 99.50th=[ 94], 99.90th=[ 106], 99.95th=[ 114], 00:27:45.943 | 99.99th=[ 123] 00:27:45.943 bw ( KiB/s): min=182784, max=505856, per=8.23%, avg=336152.15, stdev=91999.04, samples=20 00:27:45.943 iops : min= 714, max= 1976, avg=1313.00, stdev=359.44, samples=20 00:27:45.943 lat (msec) : 20=0.87%, 50=63.98%, 100=34.99%, 250=0.16% 00:27:45.943 cpu : usr=0.26%, sys=3.64%, ctx=3158, majf=0, minf=4097 00:27:45.943 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:27:45.943 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:45.943 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:45.943 issued rwts: total=13195,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:45.943 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:45.943 job1: (groupid=0, jobs=1): err= 0: pid=1809630: Thu Dec 5 13:58:43 2024 00:27:45.943 read: IOPS=1761, BW=440MiB/s (462MB/s)(4428MiB/10054msec) 00:27:45.943 slat (usec): min=7, max=35595, avg=490.79, stdev=1496.17 00:27:45.943 clat (msec): min=8, max=121, avg=35.80, stdev=21.23 00:27:45.943 lat (msec): min=8, max=124, avg=36.29, stdev=21.49 00:27:45.943 clat percentiles (msec): 00:27:45.943 | 1.00th=[ 13], 5.00th=[ 14], 10.00th=[ 15], 20.00th=[ 16], 00:27:45.943 | 30.00th=[ 20], 40.00th=[ 28], 50.00th=[ 31], 60.00th=[ 36], 00:27:45.943 | 70.00th=[ 45], 80.00th=[ 49], 90.00th=[ 66], 95.00th=[ 80], 00:27:45.943 | 99.00th=[ 100], 99.50th=[ 105], 99.90th=[ 115], 99.95th=[ 120], 00:27:45.943 | 99.99th=[ 122] 00:27:45.943 bw ( KiB/s): min=200704, max=877568, per=11.05%, avg=451681.95, stdev=225897.91, samples=20 00:27:45.943 iops : min= 784, max= 3428, avg=1764.30, stdev=882.38, samples=20 00:27:45.943 lat (msec) : 10=0.10%, 20=30.14%, 50=51.04%, 100=17.84%, 250=0.88% 00:27:45.943 cpu : usr=0.36%, sys=3.94%, ctx=4449, majf=0, minf=3659 00:27:45.943 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:27:45.943 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:45.943 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:45.943 issued rwts: total=17712,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:45.943 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:45.943 job2: (groupid=0, jobs=1): err= 0: pid=1809631: Thu Dec 5 13:58:43 2024 00:27:45.943 read: IOPS=1836, BW=459MiB/s (481MB/s)(4608MiB/10037msec) 00:27:45.943 slat (usec): min=7, max=34387, avg=487.43, stdev=1436.41 00:27:45.943 clat (usec): min=794, max=110581, avg=34329.16, stdev=14906.97 00:27:45.943 lat (usec): min=847, max=119234, avg=34816.59, stdev=15148.92 00:27:45.943 clat percentiles (msec): 00:27:45.943 | 1.00th=[ 6], 5.00th=[ 15], 10.00th=[ 16], 20.00th=[ 23], 00:27:45.943 | 30.00th=[ 30], 40.00th=[ 31], 
50.00th=[ 32], 60.00th=[ 33], 00:27:45.943 | 70.00th=[ 40], 80.00th=[ 46], 90.00th=[ 57], 95.00th=[ 62], 00:27:45.943 | 99.00th=[ 73], 99.50th=[ 96], 99.90th=[ 106], 99.95th=[ 108], 00:27:45.943 | 99.99th=[ 111] 00:27:45.943 bw ( KiB/s): min=285125, max=709120, per=11.50%, avg=470150.25, stdev=137631.24, samples=20 00:27:45.943 iops : min= 1113, max= 2770, avg=1836.45, stdev=537.69, samples=20 00:27:45.943 lat (usec) : 1000=0.01% 00:27:45.943 lat (msec) : 2=0.36%, 4=0.34%, 10=1.50%, 20=14.90%, 50=69.10% 00:27:45.943 lat (msec) : 100=13.41%, 250=0.38% 00:27:45.943 cpu : usr=0.24%, sys=4.24%, ctx=5293, majf=0, minf=4097 00:27:45.943 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:27:45.943 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:45.943 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:45.943 issued rwts: total=18431,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:45.943 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:45.943 job3: (groupid=0, jobs=1): err= 0: pid=1809632: Thu Dec 5 13:58:43 2024 00:27:45.943 read: IOPS=1036, BW=259MiB/s (272MB/s)(2604MiB/10051msec) 00:27:45.943 slat (usec): min=8, max=61148, avg=829.72, stdev=2903.44 00:27:45.943 clat (msec): min=12, max=156, avg=60.87, stdev=19.33 00:27:45.943 lat (msec): min=12, max=156, avg=61.70, stdev=19.79 00:27:45.943 clat percentiles (msec): 00:27:45.943 | 1.00th=[ 21], 5.00th=[ 31], 10.00th=[ 37], 20.00th=[ 45], 00:27:45.943 | 30.00th=[ 47], 40.00th=[ 54], 50.00th=[ 60], 60.00th=[ 65], 00:27:45.943 | 70.00th=[ 74], 80.00th=[ 79], 90.00th=[ 88], 95.00th=[ 93], 00:27:45.943 | 99.00th=[ 103], 99.50th=[ 107], 99.90th=[ 126], 99.95th=[ 126], 00:27:45.943 | 99.99th=[ 155] 00:27:45.943 bw ( KiB/s): min=186368, max=407040, per=6.49%, avg=265115.10, stdev=66302.37, samples=20 00:27:45.943 iops : min= 728, max= 1590, avg=1035.55, stdev=259.00, samples=20 00:27:45.943 lat (msec) : 20=0.89%, 50=35.16%, 100=62.45%, 250=1.50% 00:27:45.943 cpu : usr=0.31%, sys=3.44%, ctx=2997, majf=0, minf=4097 00:27:45.943 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:27:45.943 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:45.943 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:45.943 issued rwts: total=10417,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:45.943 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:45.943 job4: (groupid=0, jobs=1): err= 0: pid=1809633: Thu Dec 5 13:58:43 2024 00:27:45.943 read: IOPS=983, BW=246MiB/s (258MB/s)(2472MiB/10054msec) 00:27:45.943 slat (usec): min=8, max=60093, avg=860.49, stdev=3467.70 00:27:45.943 clat (msec): min=11, max=145, avg=64.14, stdev=19.43 00:27:45.943 lat (msec): min=11, max=145, avg=65.00, stdev=19.92 00:27:45.943 clat percentiles (msec): 00:27:45.943 | 1.00th=[ 23], 5.00th=[ 31], 10.00th=[ 34], 20.00th=[ 47], 00:27:45.943 | 30.00th=[ 54], 40.00th=[ 61], 50.00th=[ 65], 60.00th=[ 72], 00:27:45.943 | 70.00th=[ 77], 80.00th=[ 80], 90.00th=[ 89], 95.00th=[ 93], 00:27:45.943 | 99.00th=[ 107], 99.50th=[ 113], 99.90th=[ 138], 99.95th=[ 144], 00:27:45.943 | 99.99th=[ 146] 00:27:45.943 bw ( KiB/s): min=184320, max=495616, per=6.15%, avg=251492.10, stdev=68847.95, samples=20 00:27:45.943 iops : min= 720, max= 1936, avg=982.35, stdev=268.92, samples=20 00:27:45.943 lat (msec) : 20=0.76%, 50=24.99%, 100=72.23%, 250=2.02% 00:27:45.943 cpu : usr=0.27%, sys=3.12%, ctx=2849, majf=0, minf=4097 00:27:45.943 IO depths : 
1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:27:45.943 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:45.943 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:45.943 issued rwts: total=9889,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:45.943 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:45.943 job5: (groupid=0, jobs=1): err= 0: pid=1809634: Thu Dec 5 13:58:43 2024 00:27:45.943 read: IOPS=1785, BW=446MiB/s (468MB/s)(4474MiB/10024msec) 00:27:45.943 slat (usec): min=7, max=34187, avg=495.44, stdev=1464.31 00:27:45.943 clat (usec): min=849, max=90603, avg=35316.07, stdev=13936.67 00:27:45.943 lat (usec): min=889, max=90642, avg=35811.51, stdev=14203.63 00:27:45.943 clat percentiles (usec): 00:27:45.943 | 1.00th=[ 3064], 5.00th=[14091], 10.00th=[16057], 20.00th=[27132], 00:27:45.943 | 30.00th=[29492], 40.00th=[30540], 50.00th=[31589], 60.00th=[35390], 00:27:45.943 | 70.00th=[43779], 80.00th=[46924], 90.00th=[55837], 95.00th=[60556], 00:27:45.943 | 99.00th=[67634], 99.50th=[69731], 99.90th=[72877], 99.95th=[76022], 00:27:45.943 | 99.99th=[85459] 00:27:45.943 bw ( KiB/s): min=285696, max=908288, per=11.17%, avg=456436.90, stdev=144327.27, samples=20 00:27:45.943 iops : min= 1116, max= 3548, avg=1782.95, stdev=563.77, samples=20 00:27:45.943 lat (usec) : 1000=0.03% 00:27:45.943 lat (msec) : 2=0.45%, 4=0.92%, 10=1.75%, 20=10.06%, 50=71.98% 00:27:45.943 lat (msec) : 100=14.81% 00:27:45.943 cpu : usr=0.42%, sys=4.57%, ctx=5337, majf=0, minf=4097 00:27:45.943 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:27:45.943 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:45.943 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:45.943 issued rwts: total=17897,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:45.943 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:45.944 job6: (groupid=0, jobs=1): err= 0: pid=1809635: Thu Dec 5 13:58:43 2024 00:27:45.944 read: IOPS=2089, BW=522MiB/s (548MB/s)(5242MiB/10035msec) 00:27:45.944 slat (usec): min=7, max=42052, avg=454.22, stdev=1472.61 00:27:45.944 clat (usec): min=619, max=102681, avg=30144.41, stdev=16765.48 00:27:45.944 lat (usec): min=652, max=105734, avg=30598.63, stdev=17054.46 00:27:45.944 clat percentiles (msec): 00:27:45.944 | 1.00th=[ 9], 5.00th=[ 15], 10.00th=[ 15], 20.00th=[ 16], 00:27:45.944 | 30.00th=[ 17], 40.00th=[ 20], 50.00th=[ 29], 60.00th=[ 31], 00:27:45.944 | 70.00th=[ 33], 80.00th=[ 45], 90.00th=[ 57], 95.00th=[ 66], 00:27:45.944 | 99.00th=[ 77], 99.50th=[ 79], 99.90th=[ 85], 99.95th=[ 91], 00:27:45.944 | 99.99th=[ 103] 00:27:45.944 bw ( KiB/s): min=236032, max=998912, per=13.09%, avg=535081.25, stdev=230349.59, samples=20 00:27:45.944 iops : min= 922, max= 3902, avg=2090.15, stdev=899.80, samples=20 00:27:45.944 lat (usec) : 750=0.03%, 1000=0.03% 00:27:45.944 lat (msec) : 2=0.15%, 4=0.27%, 10=0.99%, 20=39.40%, 50=45.13% 00:27:45.944 lat (msec) : 100=13.98%, 250=0.03% 00:27:45.944 cpu : usr=0.35%, sys=4.15%, ctx=5053, majf=0, minf=4097 00:27:45.944 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:27:45.944 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:45.944 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:45.944 issued rwts: total=20968,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:45.944 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:45.944 job7: 
(groupid=0, jobs=1): err= 0: pid=1809636: Thu Dec 5 13:58:43 2024 00:27:45.944 read: IOPS=984, BW=246MiB/s (258MB/s)(2474MiB/10053msec) 00:27:45.944 slat (usec): min=8, max=42480, avg=964.30, stdev=2881.18 00:27:45.944 clat (msec): min=11, max=126, avg=63.99, stdev=18.10 00:27:45.944 lat (msec): min=11, max=126, avg=64.95, stdev=18.54 00:27:45.944 clat percentiles (msec): 00:27:45.944 | 1.00th=[ 21], 5.00th=[ 36], 10.00th=[ 45], 20.00th=[ 47], 00:27:45.944 | 30.00th=[ 53], 40.00th=[ 58], 50.00th=[ 62], 60.00th=[ 70], 00:27:45.944 | 70.00th=[ 77], 80.00th=[ 81], 90.00th=[ 89], 95.00th=[ 93], 00:27:45.944 | 99.00th=[ 104], 99.50th=[ 108], 99.90th=[ 123], 99.95th=[ 124], 00:27:45.944 | 99.99th=[ 127] 00:27:45.944 bw ( KiB/s): min=182784, max=367104, per=6.16%, avg=251642.90, stdev=56540.24, samples=20 00:27:45.944 iops : min= 714, max= 1434, avg=982.90, stdev=220.82, samples=20 00:27:45.944 lat (msec) : 20=0.84%, 50=26.09%, 100=71.25%, 250=1.82% 00:27:45.944 cpu : usr=0.22%, sys=3.14%, ctx=2254, majf=0, minf=4097 00:27:45.944 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:27:45.944 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:45.944 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:45.944 issued rwts: total=9895,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:45.944 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:45.944 job8: (groupid=0, jobs=1): err= 0: pid=1809637: Thu Dec 5 13:58:43 2024 00:27:45.944 read: IOPS=1009, BW=252MiB/s (265MB/s)(2537MiB/10051msec) 00:27:45.944 slat (usec): min=8, max=55348, avg=819.75, stdev=2923.07 00:27:45.944 clat (usec): min=1115, max=147432, avg=62521.73, stdev=19331.72 00:27:45.944 lat (usec): min=1156, max=147448, avg=63341.48, stdev=19762.83 00:27:45.944 clat percentiles (msec): 00:27:45.944 | 1.00th=[ 19], 5.00th=[ 29], 10.00th=[ 36], 20.00th=[ 46], 00:27:45.944 | 30.00th=[ 53], 40.00th=[ 60], 50.00th=[ 63], 60.00th=[ 70], 00:27:45.944 | 70.00th=[ 75], 80.00th=[ 79], 90.00th=[ 88], 95.00th=[ 92], 00:27:45.944 | 99.00th=[ 104], 99.50th=[ 108], 99.90th=[ 127], 99.95th=[ 134], 00:27:45.944 | 99.99th=[ 148] 00:27:45.944 bw ( KiB/s): min=193024, max=398621, per=6.32%, avg=258164.65, stdev=50112.21, samples=20 00:27:45.944 iops : min= 754, max= 1557, avg=1008.45, stdev=195.73, samples=20 00:27:45.944 lat (msec) : 2=0.10%, 20=1.51%, 50=25.37%, 100=71.56%, 250=1.47% 00:27:45.944 cpu : usr=0.31%, sys=3.29%, ctx=3241, majf=0, minf=4097 00:27:45.944 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:27:45.944 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:45.944 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:45.944 issued rwts: total=10147,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:45.944 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:45.944 job9: (groupid=0, jobs=1): err= 0: pid=1809638: Thu Dec 5 13:58:43 2024 00:27:45.944 read: IOPS=1878, BW=470MiB/s (492MB/s)(4707MiB/10023msec) 00:27:45.944 slat (usec): min=7, max=34087, avg=499.72, stdev=1696.19 00:27:45.944 clat (msec): min=9, max=115, avg=33.54, stdev=22.20 00:27:45.944 lat (msec): min=9, max=122, avg=34.04, stdev=22.55 00:27:45.944 clat percentiles (msec): 00:27:45.944 | 1.00th=[ 13], 5.00th=[ 14], 10.00th=[ 14], 20.00th=[ 15], 00:27:45.944 | 30.00th=[ 15], 40.00th=[ 19], 50.00th=[ 29], 60.00th=[ 32], 00:27:45.944 | 70.00th=[ 41], 80.00th=[ 54], 90.00th=[ 71], 95.00th=[ 81], 00:27:45.944 | 99.00th=[ 
91], 99.50th=[ 93], 99.90th=[ 100], 99.95th=[ 104], 00:27:45.944 | 99.99th=[ 116] 00:27:45.944 bw ( KiB/s): min=192000, max=938496, per=11.75%, avg=480338.80, stdev=274003.40, samples=20 00:27:45.944 iops : min= 750, max= 3666, avg=1876.25, stdev=1070.38, samples=20 00:27:45.944 lat (msec) : 10=0.04%, 20=41.37%, 50=37.08%, 100=21.42%, 250=0.10% 00:27:45.944 cpu : usr=0.32%, sys=4.16%, ctx=4148, majf=0, minf=4097 00:27:45.944 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:27:45.944 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:45.944 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:45.944 issued rwts: total=18827,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:45.944 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:45.944 job10: (groupid=0, jobs=1): err= 0: pid=1809639: Thu Dec 5 13:58:43 2024 00:27:45.944 read: IOPS=1307, BW=327MiB/s (343MB/s)(3280MiB/10034msec) 00:27:45.944 slat (usec): min=7, max=46681, avg=636.38, stdev=2496.59 00:27:45.944 clat (usec): min=772, max=148034, avg=48262.43, stdev=20713.64 00:27:45.944 lat (usec): min=802, max=148094, avg=48898.82, stdev=21139.42 00:27:45.944 clat percentiles (msec): 00:27:45.944 | 1.00th=[ 4], 5.00th=[ 18], 10.00th=[ 27], 20.00th=[ 32], 00:27:45.944 | 30.00th=[ 36], 40.00th=[ 42], 50.00th=[ 45], 60.00th=[ 50], 00:27:45.944 | 70.00th=[ 59], 80.00th=[ 66], 90.00th=[ 77], 95.00th=[ 88], 00:27:45.944 | 99.00th=[ 102], 99.50th=[ 105], 99.90th=[ 129], 99.95th=[ 136], 00:27:45.944 | 99.99th=[ 142] 00:27:45.944 bw ( KiB/s): min=165376, max=550400, per=8.18%, avg=334318.10, stdev=100747.58, samples=20 00:27:45.944 iops : min= 646, max= 2150, avg=1305.90, stdev=393.55, samples=20 00:27:45.944 lat (usec) : 1000=0.02% 00:27:45.944 lat (msec) : 2=0.40%, 4=0.61%, 10=1.38%, 20=3.72%, 50=55.38% 00:27:45.944 lat (msec) : 100=37.34%, 250=1.15% 00:27:45.944 cpu : usr=0.27%, sys=3.67%, ctx=4543, majf=0, minf=4097 00:27:45.944 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:27:45.944 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:45.944 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:45.944 issued rwts: total=13121,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:45.944 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:45.944 00:27:45.944 Run status group 0 (all jobs): 00:27:45.944 READ: bw=3991MiB/s (4185MB/s), 246MiB/s-522MiB/s (258MB/s-548MB/s), io=39.2GiB (42.1GB), run=10023-10054msec 00:27:45.944 00:27:45.944 Disk stats (read/write): 00:27:45.944 nvme0n1: ios=25750/0, merge=0/0, ticks=1218317/0, in_queue=1218317, util=96.59% 00:27:45.944 nvme10n1: ios=35116/0, merge=0/0, ticks=1220212/0, in_queue=1220212, util=96.86% 00:27:45.944 nvme1n1: ios=36219/0, merge=0/0, ticks=1220220/0, in_queue=1220220, util=97.22% 00:27:45.944 nvme2n1: ios=20507/0, merge=0/0, ticks=1222439/0, in_queue=1222439, util=97.39% 00:27:45.944 nvme3n1: ios=19476/0, merge=0/0, ticks=1223730/0, in_queue=1223730, util=97.51% 00:27:45.944 nvme4n1: ios=35250/0, merge=0/0, ticks=1220243/0, in_queue=1220243, util=97.93% 00:27:45.944 nvme5n1: ios=41291/0, merge=0/0, ticks=1216041/0, in_queue=1216041, util=98.14% 00:27:45.944 nvme6n1: ios=19459/0, merge=0/0, ticks=1218491/0, in_queue=1218491, util=98.28% 00:27:45.944 nvme7n1: ios=19933/0, merge=0/0, ticks=1222412/0, in_queue=1222412, util=98.77% 00:27:45.944 nvme8n1: ios=36830/0, merge=0/0, ticks=1217975/0, in_queue=1217975, util=99.03% 
00:27:45.944 nvme9n1: ios=25724/0, merge=0/0, ticks=1222734/0, in_queue=1222734, util=99.19% 00:27:45.944 13:58:43 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:27:45.944 [global] 00:27:45.944 thread=1 00:27:45.944 invalidate=1 00:27:45.944 rw=randwrite 00:27:45.944 time_based=1 00:27:45.944 runtime=10 00:27:45.944 ioengine=libaio 00:27:45.944 direct=1 00:27:45.944 bs=262144 00:27:45.944 iodepth=64 00:27:45.944 norandommap=1 00:27:45.944 numjobs=1 00:27:45.944 00:27:45.944 [job0] 00:27:45.944 filename=/dev/nvme0n1 00:27:45.944 [job1] 00:27:45.944 filename=/dev/nvme10n1 00:27:45.944 [job2] 00:27:45.944 filename=/dev/nvme1n1 00:27:45.944 [job3] 00:27:45.944 filename=/dev/nvme2n1 00:27:45.944 [job4] 00:27:45.944 filename=/dev/nvme3n1 00:27:45.944 [job5] 00:27:45.944 filename=/dev/nvme4n1 00:27:45.944 [job6] 00:27:45.944 filename=/dev/nvme5n1 00:27:45.944 [job7] 00:27:45.944 filename=/dev/nvme6n1 00:27:45.944 [job8] 00:27:45.944 filename=/dev/nvme7n1 00:27:45.944 [job9] 00:27:45.944 filename=/dev/nvme8n1 00:27:45.944 [job10] 00:27:45.944 filename=/dev/nvme9n1 00:27:45.944 Could not set queue depth (nvme0n1) 00:27:45.944 Could not set queue depth (nvme10n1) 00:27:45.944 Could not set queue depth (nvme1n1) 00:27:45.944 Could not set queue depth (nvme2n1) 00:27:45.944 Could not set queue depth (nvme3n1) 00:27:45.944 Could not set queue depth (nvme4n1) 00:27:45.944 Could not set queue depth (nvme5n1) 00:27:45.944 Could not set queue depth (nvme6n1) 00:27:45.944 Could not set queue depth (nvme7n1) 00:27:45.944 Could not set queue depth (nvme8n1) 00:27:45.944 Could not set queue depth (nvme9n1) 00:27:45.944 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:45.944 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:45.944 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:45.945 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:45.945 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:45.945 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:45.945 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:45.945 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:45.945 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:45.945 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:45.945 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:45.945 fio-3.35 00:27:45.945 Starting 11 threads 00:27:55.933 00:27:55.933 job0: (groupid=0, jobs=1): err= 0: pid=1811447: Thu Dec 5 13:58:54 2024 00:27:55.933 write: IOPS=900, BW=225MiB/s (236MB/s)(2268MiB/10070msec); 0 zone resets 00:27:55.933 slat (usec): min=16, max=97793, avg=976.60, stdev=3506.07 00:27:55.933 clat (usec): min=370, max=244821, avg=70055.97, 
stdev=54977.09 00:27:55.933 lat (usec): min=437, max=244871, avg=71032.57, stdev=55743.04 00:27:55.933 clat percentiles (usec): 00:27:55.933 | 1.00th=[ 1909], 5.00th=[ 6652], 10.00th=[ 12387], 20.00th=[ 26346], 00:27:55.933 | 30.00th=[ 28443], 40.00th=[ 31327], 50.00th=[ 36439], 60.00th=[ 81265], 00:27:55.933 | 70.00th=[119014], 80.00th=[129500], 90.00th=[143655], 95.00th=[156238], 00:27:55.933 | 99.00th=[191890], 99.50th=[208667], 99.90th=[233833], 99.95th=[235930], 00:27:55.933 | 99.99th=[244319] 00:27:55.933 bw ( KiB/s): min=106496, max=572928, per=7.35%, avg=230579.20, stdev=157128.50, samples=20 00:27:55.934 iops : min= 416, max= 2238, avg=900.70, stdev=613.78, samples=20 00:27:55.934 lat (usec) : 500=0.01%, 750=0.06%, 1000=0.08% 00:27:55.934 lat (msec) : 2=1.00%, 4=1.63%, 10=5.15%, 20=7.82%, 50=40.74% 00:27:55.934 lat (msec) : 100=4.58%, 250=38.94% 00:27:55.934 cpu : usr=1.66%, sys=2.77%, ctx=2927, majf=0, minf=1 00:27:55.934 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:27:55.934 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:55.934 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:55.934 issued rwts: total=0,9070,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:55.934 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:55.934 job1: (groupid=0, jobs=1): err= 0: pid=1811480: Thu Dec 5 13:58:54 2024 00:27:55.934 write: IOPS=612, BW=153MiB/s (161MB/s)(1542MiB/10072msec); 0 zone resets 00:27:55.934 slat (usec): min=16, max=87916, avg=1463.98, stdev=4197.11 00:27:55.934 clat (usec): min=624, max=221166, avg=102989.57, stdev=51510.35 00:27:55.934 lat (usec): min=665, max=221208, avg=104453.54, stdev=52331.91 00:27:55.934 clat percentiles (usec): 00:27:55.934 | 1.00th=[ 1696], 5.00th=[ 8586], 10.00th=[ 18220], 20.00th=[ 28705], 00:27:55.934 | 30.00th=[ 95945], 40.00th=[117965], 50.00th=[123208], 60.00th=[130548], 00:27:55.934 | 70.00th=[137364], 80.00th=[143655], 90.00th=[149947], 95.00th=[158335], 00:27:55.934 | 99.00th=[179307], 99.50th=[185598], 99.90th=[208667], 99.95th=[221250], 00:27:55.934 | 99.99th=[221250] 00:27:55.934 bw ( KiB/s): min=109056, max=276480, per=4.98%, avg=156313.60, stdev=57561.36, samples=20 00:27:55.934 iops : min= 426, max= 1080, avg=610.60, stdev=224.85, samples=20 00:27:55.934 lat (usec) : 750=0.03%, 1000=0.24% 00:27:55.934 lat (msec) : 2=0.89%, 4=1.13%, 10=4.28%, 20=4.80%, 50=11.95% 00:27:55.934 lat (msec) : 100=7.34%, 250=69.33% 00:27:55.934 cpu : usr=1.16%, sys=2.01%, ctx=2131, majf=0, minf=1 00:27:55.934 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:27:55.934 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:55.934 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:55.934 issued rwts: total=0,6169,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:55.934 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:55.934 job2: (groupid=0, jobs=1): err= 0: pid=1811494: Thu Dec 5 13:58:54 2024 00:27:55.934 write: IOPS=875, BW=219MiB/s (229MB/s)(2196MiB/10040msec); 0 zone resets 00:27:55.934 slat (usec): min=10, max=35193, avg=1095.95, stdev=2990.48 00:27:55.934 clat (usec): min=572, max=187101, avg=72024.08, stdev=54144.08 00:27:55.934 lat (usec): min=614, max=187153, avg=73120.03, stdev=54983.27 00:27:55.934 clat percentiles (usec): 00:27:55.934 | 1.00th=[ 1663], 5.00th=[ 6652], 10.00th=[ 12125], 20.00th=[ 17171], 00:27:55.934 | 30.00th=[ 31065], 40.00th=[ 36963], 50.00th=[ 46400], 
60.00th=[ 79168], 00:27:55.934 | 70.00th=[126354], 80.00th=[135267], 90.00th=[145753], 95.00th=[152044], 00:27:55.934 | 99.00th=[162530], 99.50th=[168821], 99.90th=[183501], 99.95th=[187696], 00:27:55.934 | 99.99th=[187696] 00:27:55.934 bw ( KiB/s): min=108544, max=875008, per=7.12%, avg=223257.60, stdev=200851.23, samples=20 00:27:55.934 iops : min= 424, max= 3418, avg=872.10, stdev=784.58, samples=20 00:27:55.934 lat (usec) : 750=0.02%, 1000=0.15% 00:27:55.934 lat (msec) : 2=1.18%, 4=1.48%, 10=5.13%, 20=14.24%, 50=33.20% 00:27:55.934 lat (msec) : 100=6.01%, 250=38.58% 00:27:55.934 cpu : usr=1.65%, sys=2.36%, ctx=2487, majf=0, minf=1 00:27:55.934 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:27:55.934 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:55.934 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:55.934 issued rwts: total=0,8785,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:55.934 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:55.934 job3: (groupid=0, jobs=1): err= 0: pid=1811502: Thu Dec 5 13:58:54 2024 00:27:55.934 write: IOPS=497, BW=124MiB/s (130MB/s)(1252MiB/10072msec); 0 zone resets 00:27:55.934 slat (usec): min=23, max=38142, avg=1993.76, stdev=3939.25 00:27:55.934 clat (msec): min=19, max=186, avg=126.67, stdev=22.95 00:27:55.934 lat (msec): min=19, max=193, avg=128.66, stdev=23.23 00:27:55.934 clat percentiles (msec): 00:27:55.934 | 1.00th=[ 54], 5.00th=[ 74], 10.00th=[ 106], 20.00th=[ 117], 00:27:55.934 | 30.00th=[ 121], 40.00th=[ 126], 50.00th=[ 130], 60.00th=[ 134], 00:27:55.934 | 70.00th=[ 138], 80.00th=[ 144], 90.00th=[ 150], 95.00th=[ 157], 00:27:55.934 | 99.00th=[ 169], 99.50th=[ 180], 99.90th=[ 184], 99.95th=[ 184], 00:27:55.934 | 99.99th=[ 186] 00:27:55.934 bw ( KiB/s): min=106496, max=216064, per=4.04%, avg=126592.00, stdev=23536.69, samples=20 00:27:55.934 iops : min= 416, max= 844, avg=494.50, stdev=91.94, samples=20 00:27:55.934 lat (msec) : 20=0.06%, 50=0.22%, 100=8.97%, 250=90.75% 00:27:55.934 cpu : usr=1.00%, sys=1.67%, ctx=1235, majf=0, minf=1 00:27:55.934 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:27:55.934 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:55.934 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:55.934 issued rwts: total=0,5008,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:55.934 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:55.934 job4: (groupid=0, jobs=1): err= 0: pid=1811506: Thu Dec 5 13:58:54 2024 00:27:55.934 write: IOPS=1286, BW=322MiB/s (337MB/s)(3224MiB/10024msec); 0 zone resets 00:27:55.934 slat (usec): min=18, max=49637, avg=763.98, stdev=2301.29 00:27:55.934 clat (usec): min=1681, max=176640, avg=48972.19, stdev=44551.27 00:27:55.934 lat (usec): min=1745, max=187861, avg=49736.16, stdev=45262.31 00:27:55.934 clat percentiles (msec): 00:27:55.934 | 1.00th=[ 15], 5.00th=[ 16], 10.00th=[ 16], 20.00th=[ 19], 00:27:55.934 | 30.00th=[ 27], 40.00th=[ 30], 50.00th=[ 31], 60.00th=[ 33], 00:27:55.934 | 70.00th=[ 39], 80.00th=[ 51], 90.00th=[ 138], 95.00th=[ 148], 00:27:55.934 | 99.00th=[ 159], 99.50th=[ 163], 99.90th=[ 176], 99.95th=[ 176], 00:27:55.934 | 99.99th=[ 178] 00:27:55.934 bw ( KiB/s): min=108544, max=908288, per=10.47%, avg=328524.80, stdev=251848.47, samples=20 00:27:55.934 iops : min= 424, max= 3548, avg=1283.30, stdev=983.78, samples=20 00:27:55.934 lat (msec) : 2=0.02%, 4=0.05%, 10=0.44%, 20=21.91%, 50=57.61% 
00:27:55.934 lat (msec) : 100=1.29%, 250=18.68%
00:27:55.934 cpu : usr=2.36%, sys=2.59%, ctx=2968, majf=0, minf=1
00:27:55.934 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5%
00:27:55.934 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:55.934 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:27:55.934 issued rwts: total=0,12896,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:55.934 latency : target=0, window=0, percentile=100.00%, depth=64
00:27:55.934 job5: (groupid=0, jobs=1): err= 0: pid=1811516: Thu Dec 5 13:58:54 2024
00:27:55.934 write: IOPS=1475, BW=369MiB/s (387MB/s)(3700MiB/10026msec); 0 zone resets
00:27:55.934 slat (usec): min=12, max=103363, avg=608.78, stdev=2420.45
00:27:55.934 clat (usec): min=301, max=278777, avg=42741.22, stdev=44324.05
00:27:55.934 lat (usec): min=330, max=278820, avg=43350.00, stdev=44905.44
00:27:55.935 clat percentiles (usec):
00:27:55.935 | 1.00th=[ 1762], 5.00th=[ 13698], 10.00th=[ 14877], 20.00th=[ 15664],
00:27:55.935 | 30.00th=[ 16581], 40.00th=[ 17695], 50.00th=[ 25035], 60.00th=[ 28705],
00:27:55.935 | 70.00th=[ 32900], 80.00th=[ 52167], 90.00th=[125305], 95.00th=[141558],
00:27:55.935 | 99.00th=[170918], 99.50th=[187696], 99.90th=[227541], 99.95th=[242222],
00:27:55.935 | 99.99th=[252707]
00:27:55.935 bw ( KiB/s): min=106496, max=1008640, per=12.03%, avg=377216.00, stdev=292173.19, samples=20
00:27:55.935 iops : min= 416, max= 3940, avg=1473.50, stdev=1141.30, samples=20
00:27:55.935 lat (usec) : 500=0.18%, 750=0.11%, 1000=0.11%
00:27:55.935 lat (msec) : 2=0.70%, 4=0.74%, 10=1.05%, 20=43.15%, 50=33.69%
00:27:55.935 lat (msec) : 100=2.97%, 250=17.25%, 500=0.03%
00:27:55.935 cpu : usr=2.51%, sys=3.48%, ctx=4010, majf=0, minf=1
00:27:55.935 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6%
00:27:55.935 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:55.935 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:27:55.935 issued rwts: total=0,14798,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:55.935 latency : target=0, window=0, percentile=100.00%, depth=64
00:27:55.935 job6: (groupid=0, jobs=1): err= 0: pid=1811520: Thu Dec 5 13:58:54 2024
00:27:55.935 write: IOPS=1939, BW=485MiB/s (508MB/s)(4883MiB/10070msec); 0 zone resets
00:27:55.935 slat (usec): min=11, max=93906, avg=471.51, stdev=1842.16
00:27:55.935 clat (usec): min=215, max=223525, avg=32515.93, stdev=37303.34
00:27:55.935 lat (usec): min=242, max=223583, avg=32987.45, stdev=37887.28
00:27:55.935 clat percentiles (usec):
00:27:55.935 | 1.00th=[ 1401], 5.00th=[ 4146], 10.00th=[ 7373], 20.00th=[ 12387],
00:27:55.935 | 30.00th=[ 15401], 40.00th=[ 16188], 50.00th=[ 16909], 60.00th=[ 19268],
00:27:55.935 | 70.00th=[ 26608], 80.00th=[ 35390], 90.00th=[117965], 95.00th=[126354],
00:27:55.935 | 99.00th=[137364], 99.50th=[141558], 99.90th=[170918], 99.95th=[181404],
00:27:55.935 | 99.99th=[221250]
00:27:55.935 bw ( KiB/s): min=120832, max=1261056, per=15.89%, avg=498355.20, stdev=418363.00, samples=20
00:27:55.935 iops : min= 472, max= 4926, avg=1946.70, stdev=1634.23, samples=20
00:27:55.935 lat (usec) : 250=0.01%, 500=0.04%, 750=0.16%, 1000=0.28%
00:27:55.935 lat (msec) : 2=1.53%, 4=2.85%, 10=10.30%, 20=46.07%, 50=23.80%
00:27:55.935 lat (msec) : 100=2.35%, 250=12.61%
00:27:55.935 cpu : usr=3.08%, sys=4.64%, ctx=5445, majf=0, minf=1
00:27:55.935 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7%
00:27:55.935 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:55.935 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:27:55.935 issued rwts: total=0,19530,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:55.935 latency : target=0, window=0, percentile=100.00%, depth=64
00:27:55.935 job7: (groupid=0, jobs=1): err= 0: pid=1811527: Thu Dec 5 13:58:54 2024
00:27:55.935 write: IOPS=1439, BW=360MiB/s (377MB/s)(3612MiB/10040msec); 0 zone resets
00:27:55.935 slat (usec): min=11, max=90128, avg=553.73, stdev=2582.92
00:27:55.935 clat (usec): min=642, max=236958, avg=43902.08, stdev=42872.45
00:27:55.935 lat (usec): min=797, max=238321, avg=44455.80, stdev=43420.41
00:27:55.935 clat percentiles (usec):
00:27:55.935 | 1.00th=[ 1909], 5.00th=[ 7832], 10.00th=[ 14222], 20.00th=[ 15664],
00:27:55.935 | 30.00th=[ 19792], 40.00th=[ 25822], 50.00th=[ 30540], 60.00th=[ 33817],
00:27:55.935 | 70.00th=[ 41681], 80.00th=[ 47449], 90.00th=[133694], 95.00th=[145753],
00:27:55.935 | 99.00th=[175113], 99.50th=[193987], 99.90th=[221250], 99.95th=[227541],
00:27:55.935 | 99.99th=[235930]
00:27:55.935 bw ( KiB/s): min=101888, max=852480, per=11.74%, avg=368281.60, stdev=262470.12, samples=20
00:27:55.935 iops : min= 398, max= 3330, avg=1438.60, stdev=1025.27, samples=20
00:27:55.935 lat (usec) : 750=0.02%, 1000=0.10%
00:27:55.935 lat (msec) : 2=1.07%, 4=1.38%, 10=4.03%, 20=23.63%, 50=52.61%
00:27:55.935 lat (msec) : 100=3.61%, 250=13.56%
00:27:55.935 cpu : usr=2.39%, sys=3.80%, ctx=4649, majf=0, minf=1
00:27:55.935 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6%
00:27:55.935 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:55.935 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:27:55.935 issued rwts: total=0,14449,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:55.935 latency : target=0, window=0, percentile=100.00%, depth=64
00:27:55.935 job8: (groupid=0, jobs=1): err= 0: pid=1811548: Thu Dec 5 13:58:54 2024
00:27:55.935 write: IOPS=816, BW=204MiB/s (214MB/s)(2056MiB/10072msec); 0 zone resets
00:27:55.935 slat (usec): min=20, max=121766, avg=979.08, stdev=4214.37
00:27:55.935 clat (usec): min=479, max=279616, avg=77390.29, stdev=62701.86
00:27:55.935 lat (usec): min=556, max=280216, avg=78369.37, stdev=63550.62
00:27:55.935 clat percentiles (usec):
00:27:55.935 | 1.00th=[ 865], 5.00th=[ 1926], 10.00th=[ 3228], 20.00th=[ 6587],
00:27:55.935 | 30.00th=[ 12780], 40.00th=[ 28705], 50.00th=[100140], 60.00th=[119014],
00:27:55.935 | 70.00th=[128451], 80.00th=[137364], 90.00th=[149947], 95.00th=[158335],
00:27:55.935 | 99.00th=[202376], 99.50th=[221250], 99.90th=[261096], 99.95th=[274727],
00:27:55.935 | 99.99th=[278922]
00:27:55.935 bw ( KiB/s): min=111104, max=433152, per=6.66%, avg=208870.40, stdev=100317.69, samples=20
00:27:55.935 iops : min= 434, max= 1692, avg=815.90, stdev=391.87, samples=20
00:27:55.935 lat (usec) : 500=0.01%, 750=0.49%, 1000=0.95%
00:27:55.935 lat (msec) : 2=3.89%, 4=7.57%, 10=13.27%, 20=7.35%, 50=10.87%
00:27:55.935 lat (msec) : 100=5.64%, 250=49.78%, 500=0.18%
00:27:55.935 cpu : usr=1.69%, sys=2.42%, ctx=3373, majf=0, minf=1
00:27:55.935 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2%
00:27:55.935 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:55.935 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:27:55.935 issued rwts: total=0,8222,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:55.935 latency : target=0, window=0, percentile=100.00%, depth=64
00:27:55.935 job9: (groupid=0, jobs=1): err= 0: pid=1811557: Thu Dec 5 13:58:54 2024
00:27:55.935 write: IOPS=1896, BW=474MiB/s (497MB/s)(4761MiB/10040msec); 0 zone resets
00:27:55.935 slat (usec): min=10, max=82172, avg=446.60, stdev=1915.89
00:27:55.935 clat (usec): min=226, max=209860, avg=33284.46, stdev=37131.60
00:27:55.935 lat (usec): min=268, max=209908, avg=33731.06, stdev=37704.11
00:27:55.935 clat percentiles (usec):
00:27:55.935 | 1.00th=[ 1172], 5.00th=[ 2802], 10.00th=[ 5014], 20.00th=[ 9372],
00:27:55.935 | 30.00th=[ 12911], 40.00th=[ 15008], 50.00th=[ 27657], 60.00th=[ 30278],
00:27:55.935 | 70.00th=[ 32375], 80.00th=[ 42206], 90.00th=[ 55313], 95.00th=[137364],
00:27:55.935 | 99.00th=[154141], 99.50th=[164627], 99.90th=[183501], 99.95th=[187696],
00:27:55.935 | 99.99th=[208667]
00:27:55.935 bw ( KiB/s): min=109056, max=1575936, per=15.49%, avg=485888.00, stdev=472321.07, samples=20
00:27:55.935 iops : min= 426, max= 6156, avg=1898.00, stdev=1845.00, samples=20
00:27:55.935 lat (usec) : 250=0.01%, 500=0.06%, 750=0.19%, 1000=0.38%
00:27:55.935 lat (msec) : 2=2.53%, 4=4.52%, 10=14.15%, 20=24.91%, 50=42.33%
00:27:55.935 lat (msec) : 100=1.49%, 250=9.43%
00:27:55.935 cpu : usr=2.93%, sys=5.15%, ctx=6134, majf=0, minf=1
00:27:55.936 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7%
00:27:55.936 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:55.936 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:27:55.936 issued rwts: total=0,19043,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:55.936 latency : target=0, window=0, percentile=100.00%, depth=64
00:27:55.936 job10: (groupid=0, jobs=1): err= 0: pid=1811563: Thu Dec 5 13:58:54 2024
00:27:55.936 write: IOPS=539, BW=135MiB/s (141MB/s)(1358MiB/10071msec); 0 zone resets
00:27:55.936 slat (usec): min=18, max=62081, avg=1696.64, stdev=3768.78
00:27:55.936 clat (msec): min=2, max=210, avg=116.92, stdev=37.59
00:27:55.936 lat (msec): min=2, max=210, avg=118.62, stdev=38.27
00:27:55.936 clat percentiles (msec):
00:27:55.936 | 1.00th=[ 12], 5.00th=[ 34], 10.00th=[ 44], 20.00th=[ 111],
00:27:55.936 | 30.00th=[ 118], 40.00th=[ 123], 50.00th=[ 128], 60.00th=[ 132],
00:27:55.936 | 70.00th=[ 138], 80.00th=[ 142], 90.00th=[ 150], 95.00th=[ 155],
00:27:55.936 | 99.00th=[ 178], 99.50th=[ 186], 99.90th=[ 207], 99.95th=[ 211],
00:27:55.936 | 99.99th=[ 211]
00:27:55.936 bw ( KiB/s): min=107008, max=343040, per=4.38%, avg=137446.40, stdev=51788.51, samples=20
00:27:55.936 iops : min= 418, max= 1340, avg=536.90, stdev=202.30, samples=20
00:27:55.936 lat (msec) : 4=0.46%, 10=0.35%, 20=1.88%, 50=9.24%, 100=6.08%
00:27:55.936 lat (msec) : 250=82.00%
00:27:55.936 cpu : usr=1.38%, sys=1.57%, ctx=1705, majf=0, minf=1
00:27:55.936 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8%
00:27:55.936 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:55.936 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:27:55.936 issued rwts: total=0,5432,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:55.936 latency : target=0, window=0, percentile=100.00%, depth=64
00:27:55.936
00:27:55.936 Run status group 0 (all jobs):
00:27:55.936 WRITE: bw=3063MiB/s (3212MB/s), 124MiB/s-485MiB/s (130MB/s-508MB/s), io=30.1GiB (32.3GB), run=10024-10072msec
00:27:55.936
00:27:55.936 Disk stats (read/write):
00:27:55.936 nvme0n1: ios=49/17989, merge=0/0, ticks=3/1225947, in_queue=1225950, util=97.64%
00:27:55.936 nvme10n1: ios=0/12181, merge=0/0, ticks=0/1224238, in_queue=1224238, util=97.82%
00:27:55.936 nvme1n1: ios=0/17341, merge=0/0, ticks=0/1224188, in_queue=1224188, util=98.03%
00:27:55.936 nvme2n1: ios=0/9861, merge=0/0, ticks=0/1218744, in_queue=1218744, util=98.14%
00:27:55.936 nvme3n1: ios=0/25534, merge=0/0, ticks=0/1234439, in_queue=1234439, util=98.22%
00:27:55.936 nvme4n1: ios=0/29352, merge=0/0, ticks=0/1231962, in_queue=1231962, util=98.41%
00:27:55.936 nvme5n1: ios=0/38908, merge=0/0, ticks=0/1225487, in_queue=1225487, util=98.52%
00:27:55.936 nvme6n1: ios=0/28715, merge=0/0, ticks=0/1234151, in_queue=1234151, util=98.60%
00:27:55.936 nvme7n1: ios=0/16288, merge=0/0, ticks=0/1231813, in_queue=1231813, util=98.86%
00:27:55.936 nvme8n1: ios=0/37892, merge=0/0, ticks=0/1232466, in_queue=1232466, util=99.00%
00:27:55.936 nvme9n1: ios=0/10708, merge=0/0, ticks=0/1224052, in_queue=1224052, util=99.08%
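For context, the write workload summarized above is an fio run with one job per connected NVMe-oF namespace (job0-job10, queue depth 64, roughly 10 seconds per the run= figures). The exact fio command line is not captured in this log, so the following reconstruction is a hypothetical sketch: the device paths, the 256k block size (inferred from the ~256 KiB average transfer per IO implied by the bw/iops ratios above) and the runtime are all assumptions; the local-job0-0-verify.state file removed later in the teardown suggests verify was enabled as well.

    # hypothetical sketch -- not the literal command used by multiconnection.sh
    jobs=""
    for n in $(seq 0 10); do
        jobs="$jobs --name=job$n --filename=/dev/nvme${n}n1"
    done
    fio --rw=write --ioengine=libaio --iodepth=64 --bs=256k \
        --time_based --runtime=10 $jobs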
00:27:55.936 13:58:54 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync
00:27:55.936 13:58:54 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11
00:27:55.936 13:58:54 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:27:55.936 13:58:54 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:27:56.196 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:27:56.196 13:58:55 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1
00:27:56.196 13:58:55 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0
00:27:56.196 13:58:55 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:27:56.196 13:58:55 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK1
00:27:56.196 13:58:55 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:27:56.196 13:58:55 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK1
00:27:56.196 13:58:55 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0
00:27:56.196 13:58:55 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:27:56.196 13:58:55 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:56.196 13:58:55 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:27:56.196 13:58:55 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:56.196 13:58:55 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:27:56.196 13:58:55 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2
00:27:57.134 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s)
00:27:57.134 13:58:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2
00:27:57.134 13:58:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0
00:27:57.134 13:58:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:27:57.134 13:58:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK2
00:27:57.134 13:58:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:27:57.134 13:58:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK2
00:27:57.134 13:58:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0
00:27:57.134 13:58:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2
00:27:57.134 13:58:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:57.134 13:58:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:27:57.134 13:58:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:57.134 13:58:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:27:57.134 13:58:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3
00:27:58.073 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s)
00:27:58.073 13:58:57 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3
00:27:58.073 13:58:57 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0
00:27:58.073 13:58:57 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:27:58.073 13:58:57 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK3
00:27:58.073 13:58:57 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:27:58.073 13:58:57 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK3
00:27:58.073 13:58:57 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0
00:27:58.073 13:58:57 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3
00:27:58.073 13:58:57 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:58.073 13:58:57 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:27:58.073 13:58:57 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
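The teardown pattern traced here repeats once per subsystem: disconnect the initiator side, wait until no block device with the matching serial remains visible, then delete the subsystem on the target. Condensed into a sketch (rpc_cmd is the SPDK test wrapper around scripts/rpc.py; the real waitforserial_disconnect in common/autotest_common.sh appears to bound its wait with a retry counter, per the local i=0 above, which the unbounded loop below simplifies away):

    # condensed from the surrounding xtrace records; retry limits omitted
    NVMF_SUBSYS=11
    for i in $(seq 1 $NVMF_SUBSYS); do
        nvme disconnect -n "nqn.2016-06.io.spdk:cnode$i"
        # waitforserial_disconnect SPDK$i: poll until the namespace is gone
        while lsblk -l -o NAME,SERIAL | grep -q -w "SPDK$i"; do
            sleep 1
        done
        rpc_cmd nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"
    done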
00:27:58.073 13:58:57 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:27:58.073 13:58:57 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4
00:27:59.452 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s)
00:27:59.452 13:58:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4
00:27:59.452 13:58:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0
00:27:59.452 13:58:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:27:59.452 13:58:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK4
00:27:59.452 13:58:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK4
00:27:59.452 13:58:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:27:59.452 13:58:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0
00:27:59.452 13:58:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4
00:27:59.452 13:58:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:59.452 13:58:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:27:59.452 13:58:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:59.452 13:58:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:27:59.452 13:58:58 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5
00:28:00.020 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s)
00:28:00.020 13:58:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5
00:28:00.020 13:58:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0
00:28:00.020 13:58:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:28:00.020 13:58:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK5
00:28:00.020 13:58:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:28:00.020 13:58:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK5
00:28:00.279 13:58:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0
00:28:00.279 13:58:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5
00:28:00.279 13:58:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:00.279 13:58:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:28:00.279 13:58:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:00.279 13:58:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:28:00.279 13:58:59 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6
00:28:01.215 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s)
00:28:01.215 13:59:00 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6
00:28:01.215 13:59:00 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0
00:28:01.215 13:59:00 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:28:01.215 13:59:00 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK6
00:28:01.215 13:59:00 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:28:01.215 13:59:00 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK6
00:28:01.215 13:59:00 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0
00:28:01.215 13:59:00 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6
00:28:01.215 13:59:00 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:01.215 13:59:00 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:28:01.215 13:59:00 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:01.215 13:59:00 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:28:01.215 13:59:00 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7
00:28:02.148 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s)
00:28:02.148 13:59:01 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7
00:28:02.148 13:59:01 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0
00:28:02.148 13:59:01 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:28:02.148 13:59:01 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK7
00:28:02.148 13:59:01 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:28:02.148 13:59:01 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK7
00:28:02.148 13:59:01 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0
00:28:02.148 13:59:01 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7
00:28:02.148 13:59:01 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:02.148 13:59:01 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:28:02.148 13:59:01 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:02.148 13:59:01 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:28:02.148 13:59:01 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8
00:28:03.082 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s)
00:28:03.082 13:59:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8
00:28:03.082 13:59:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0
00:28:03.082 13:59:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:28:03.082 13:59:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK8
00:28:03.082 13:59:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:28:03.082 13:59:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK8
00:28:03.082 13:59:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0
00:28:03.082 13:59:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8
00:28:03.082 13:59:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:03.082 13:59:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:28:03.082 13:59:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:03.082 13:59:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:28:03.082 13:59:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9
00:28:04.019 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s)
00:28:04.019 13:59:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9
00:28:04.019 13:59:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0
00:28:04.019 13:59:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:28:04.019 13:59:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK9
00:28:04.019 13:59:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:28:04.019 13:59:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK9
00:28:04.019 13:59:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0
00:28:04.019 13:59:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9
00:28:04.019 13:59:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:04.019 13:59:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:28:04.019 13:59:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:04.019 13:59:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:28:04.019 13:59:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10
00:28:04.958 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s)
00:28:04.958 13:59:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10
00:28:04.958 13:59:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0
00:28:04.958 13:59:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:28:04.958 13:59:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK10
00:28:04.958 13:59:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK10
00:28:04.958 13:59:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:28:04.958 13:59:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0
00:28:04.958 13:59:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10
00:28:04.958 13:59:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:04.958 13:59:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:28:05.217 13:59:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:05.217 13:59:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:28:05.217 13:59:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11
00:28:06.167 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s)
00:28:06.167 13:59:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11
00:28:06.167 13:59:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0
00:28:06.167 13:59:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:28:06.167 13:59:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK11
00:28:06.167 13:59:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK11
00:28:06.167 13:59:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:28:06.167 13:59:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0
00:28:06.167 13:59:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11
00:28:06.167 13:59:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:06.167 13:59:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:28:06.167 13:59:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:06.167 13:59:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state
00:28:06.167 13:59:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT
00:28:06.167 13:59:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini
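nvmftestfini drives the generic cleanup traced next: nvmfcleanup unloads the kernel initiator modules and killprocess stops the SPDK target application (pid 1803181 here). A rough paraphrase of the nvmf/common.sh helpers, with retry pacing and error handling deliberately simplified, so treat it as a sketch rather than the verbatim implementation:

    # paraphrase of nvmf/common.sh cleanup helpers -- simplified, not verbatim
    nvmfcleanup() {
        sync
        set +e
        for i in {1..20}; do
            # unload initiator-side modules; retried because they can be busy
            modprobe -v -r nvme-rdma && modprobe -v -r nvme-fabrics && break
        done
        set -e
    }
    killprocess() {
        local pid=$1
        kill -0 "$pid" && kill "$pid" && wait "$pid"
    }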
00:28:06.167 13:59:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@516 -- # nvmfcleanup
00:28:06.167 13:59:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@121 -- # sync
00:28:06.167 13:59:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:28:06.167 13:59:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:28:06.167 13:59:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@124 -- # set +e
00:28:06.167 13:59:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@125 -- # for i in {1..20}
00:28:06.167 13:59:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
00:28:06.167 rmmod nvme_rdma
00:28:06.167 rmmod nvme_fabrics
00:28:06.167 13:59:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:28:06.167 13:59:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@128 -- # set -e
00:28:06.167 13:59:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@129 -- # return 0
00:28:06.167 13:59:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@517 -- # '[' -n 1803181 ']'
00:28:06.167 13:59:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@518 -- # killprocess 1803181
00:28:06.167 13:59:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@954 -- # '[' -z 1803181 ']'
00:28:06.167 13:59:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@958 -- # kill -0 1803181
00:28:06.167 13:59:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@959 -- # uname
00:28:06.167 13:59:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:28:06.167 13:59:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1803181
00:28:06.167 13:59:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:28:06.167 13:59:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:28:06.167 13:59:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1803181'
00:28:06.167 killing process with pid 1803181
00:28:06.167 13:59:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@973 -- # kill 1803181
00:28:06.167 13:59:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@978 -- # wait 1803181
00:28:06.736 13:59:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:28:06.736 13:59:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]]
00:28:06.736
00:28:06.736 real 1m13.540s
00:28:06.736 user 4m44.994s
00:28:06.736 sys 0m16.433s
00:28:06.736 13:59:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1130 -- # xtrace_disable
00:28:06.736 13:59:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:28:06.736 ************************************
00:28:06.736 END TEST nvmf_multiconnection
00:28:06.736 ************************************
00:28:06.736 13:59:06 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@50 -- #
run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=rdma 00:28:06.736 13:59:06 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:06.736 13:59:06 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:06.736 13:59:06 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:28:06.736 ************************************ 00:28:06.736 START TEST nvmf_initiator_timeout 00:28:06.736 ************************************ 00:28:06.736 13:59:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=rdma 00:28:06.736 * Looking for test storage... 00:28:06.736 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:28:06.736 13:59:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:06.736 13:59:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1711 -- # lcov --version 00:28:06.736 13:59:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:06.736 13:59:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:06.736 13:59:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:06.736 13:59:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:06.736 13:59:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:06.736 13:59:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:28:06.736 13:59:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:28:06.736 13:59:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:28:06.736 13:59:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:28:06.736 13:59:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:28:06.736 13:59:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:28:06.736 13:59:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:28:06.736 13:59:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:06.736 13:59:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@344 -- # case "$op" in 00:28:06.736 13:59:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@345 -- # : 1 00:28:06.736 13:59:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:06.736 13:59:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:06.996 13:59:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # decimal 1 00:28:06.996 13:59:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=1 00:28:06.996 13:59:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:06.996 13:59:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 1 00:28:06.996 13:59:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:28:06.996 13:59:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # decimal 2 00:28:06.996 13:59:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=2 00:28:06.996 13:59:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:06.996 13:59:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 2 00:28:06.996 13:59:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:28:06.996 13:59:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:06.996 13:59:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:06.996 13:59:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # return 0 00:28:06.996 13:59:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:06.996 13:59:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:06.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:06.996 --rc genhtml_branch_coverage=1 00:28:06.996 --rc genhtml_function_coverage=1 00:28:06.996 --rc genhtml_legend=1 00:28:06.996 --rc geninfo_all_blocks=1 00:28:06.996 --rc geninfo_unexecuted_blocks=1 00:28:06.996 00:28:06.996 ' 00:28:06.996 13:59:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:06.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:06.996 --rc genhtml_branch_coverage=1 00:28:06.996 --rc genhtml_function_coverage=1 00:28:06.996 --rc genhtml_legend=1 00:28:06.996 --rc geninfo_all_blocks=1 00:28:06.996 --rc geninfo_unexecuted_blocks=1 00:28:06.996 00:28:06.996 ' 00:28:06.996 13:59:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:06.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:06.996 --rc genhtml_branch_coverage=1 00:28:06.996 --rc genhtml_function_coverage=1 00:28:06.996 --rc genhtml_legend=1 00:28:06.996 --rc geninfo_all_blocks=1 00:28:06.996 --rc geninfo_unexecuted_blocks=1 00:28:06.996 00:28:06.996 ' 00:28:06.996 13:59:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:06.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:06.996 --rc genhtml_branch_coverage=1 00:28:06.996 --rc genhtml_function_coverage=1 00:28:06.996 --rc genhtml_legend=1 00:28:06.996 --rc geninfo_all_blocks=1 00:28:06.996 --rc geninfo_unexecuted_blocks=1 00:28:06.996 00:28:06.996 ' 00:28:06.996 13:59:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- 
target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:28:06.996 13:59:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:28:06.996 13:59:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:06.996 13:59:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:06.996 13:59:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:06.996 13:59:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:06.996 13:59:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:06.996 13:59:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:06.996 13:59:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:06.996 13:59:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:06.996 13:59:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:06.996 13:59:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:06.996 13:59:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:28:06.996 13:59:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:28:06.996 13:59:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:06.996 13:59:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:06.996 13:59:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:06.996 13:59:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:06.996 13:59:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:28:06.996 13:59:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:28:06.996 13:59:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:06.996 13:59:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:06.996 13:59:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:06.996 13:59:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:06.996 13:59:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:06.997 13:59:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:06.997 13:59:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:28:06.997 13:59:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:06.997 13:59:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # : 0 00:28:06.997 13:59:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:06.997 13:59:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:06.997 13:59:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:06.997 13:59:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:06.997 13:59:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:06.997 13:59:06 
nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:06.997 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:06.997 13:59:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:06.997 13:59:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:06.997 13:59:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:06.997 13:59:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:06.997 13:59:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:06.997 13:59:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:28:06.997 13:59:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:28:06.997 13:59:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:06.997 13:59:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:06.997 13:59:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:06.997 13:59:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:06.997 13:59:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:06.997 13:59:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:06.997 13:59:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:06.997 13:59:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:06.997 13:59:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:06.997 13:59:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@309 -- # xtrace_disable 00:28:06.997 13:59:06 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:13.573 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:13.573 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # pci_devs=() 00:28:13.573 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:13.573 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:13.573 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:13.573 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:13.573 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:13.573 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@319 -- # net_devs=() 00:28:13.573 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:13.573 13:59:12 
nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # e810=() 00:28:13.573 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # local -ga e810 00:28:13.573 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # x722=() 00:28:13.573 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # local -ga x722 00:28:13.573 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # mlx=() 00:28:13.573 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # local -ga mlx 00:28:13.573 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:13.573 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:13.573 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:13.573 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:13.573 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:13.573 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:13.573 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:13.573 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:13.573 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:13.573 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:13.573 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:13.573 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:13.573 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:13.573 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:28:13.573 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:28:13.573 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:28:13.573 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:28:13.573 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:28:13.573 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:13.573 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:13.574 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:28:13.574 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:28:13.574 
13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:28:13.574 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:28:13.574 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:28:13.574 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:28:13.574 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:28:13.574 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:28:13.574 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:13.574 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:28:13.574 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:28:13.574 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:28:13.574 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:28:13.574 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:28:13.574 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:28:13.574 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:28:13.574 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:28:13.574 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:13.574 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:28:13.574 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:13.574 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:13.574 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:28:13.574 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:13.574 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:13.574 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:28:13.574 Found net devices under 0000:18:00.0: mlx_0_0 00:28:13.574 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:13.574 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:13.574 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:13.574 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:28:13.574 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@422 
-- # (( 1 == 0 )) 00:28:13.574 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:13.574 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:28:13.574 Found net devices under 0000:18:00.1: mlx_0_1 00:28:13.574 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:13.574 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:13.574 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # is_hw=yes 00:28:13.574 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:13.574 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:28:13.574 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:28:13.574 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@448 -- # rdma_device_init 00:28:13.574 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:28:13.574 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@62 -- # uname 00:28:13.574 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:28:13.574 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@66 -- # modprobe ib_cm 00:28:13.574 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@67 -- # modprobe ib_core 00:28:13.574 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@68 -- # modprobe ib_umad 00:28:13.574 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:28:13.574 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@70 -- # modprobe iw_cm 00:28:13.574 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:28:13.574 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:28:13.574 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@530 -- # allocate_nic_ips 00:28:13.574 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:28:13.574 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@77 -- # get_rdma_if_list 00:28:13.574 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:28:13.574 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:28:13.574 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:28:13.574 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:28:13.574 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:28:13.574 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:28:13.574 13:59:12 
nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:13.574 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:28:13.574 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@108 -- # echo mlx_0_0 00:28:13.574 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@109 -- # continue 2 00:28:13.574 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:28:13.574 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:13.574 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:28:13.574 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:13.574 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:28:13.574 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@108 -- # echo mlx_0_1 00:28:13.574 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@109 -- # continue 2 00:28:13.574 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:28:13.574 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:28:13.574 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:28:13.574 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:28:13.574 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # awk '{print $4}' 00:28:13.574 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # cut -d/ -f1 00:28:13.574 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:28:13.574 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:28:13.574 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:28:13.574 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:28:13.574 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:28:13.574 altname enp24s0f0np0 00:28:13.574 altname ens785f0np0 00:28:13.574 inet 192.168.100.8/24 scope global mlx_0_0 00:28:13.574 valid_lft forever preferred_lft forever 00:28:13.574 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:28:13.574 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:28:13.574 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:28:13.574 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:28:13.574 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # awk '{print $4}' 00:28:13.574 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # cut -d/ -f1 00:28:13.574 13:59:12 
nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:28:13.574 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:28:13.574 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:28:13.574 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:28:13.574 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:28:13.574 altname enp24s0f1np1 00:28:13.574 altname ens785f1np1 00:28:13.574 inet 192.168.100.9/24 scope global mlx_0_1 00:28:13.574 valid_lft forever preferred_lft forever 00:28:13.574 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # return 0 00:28:13.574 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:13.574 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:28:13.574 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:28:13.574 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:28:13.574 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@90 -- # get_rdma_if_list 00:28:13.574 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:28:13.574 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:28:13.574 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:28:13.574 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:28:13.575 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:28:13.575 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:28:13.575 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:13.575 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:28:13.575 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@108 -- # echo mlx_0_0 00:28:13.575 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@109 -- # continue 2 00:28:13.575 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:28:13.575 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:13.575 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:28:13.575 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:13.575 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:28:13.575 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@108 -- # echo mlx_0_1 00:28:13.575 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@109 -- 
# continue 2 00:28:13.575 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:28:13.575 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:28:13.575 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:28:13.575 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:28:13.575 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # awk '{print $4}' 00:28:13.575 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # cut -d/ -f1 00:28:13.575 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:28:13.575 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:28:13.575 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:28:13.575 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:28:13.575 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # awk '{print $4}' 00:28:13.575 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # cut -d/ -f1 00:28:13.575 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:28:13.575 192.168.100.9' 00:28:13.575 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:28:13.575 192.168.100.9' 00:28:13.575 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@485 -- # head -n 1 00:28:13.575 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:28:13.575 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:28:13.575 192.168.100.9' 00:28:13.575 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@486 -- # tail -n +2 00:28:13.575 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@486 -- # head -n 1 00:28:13.575 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:28:13.575 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:28:13.575 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:28:13.575 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:28:13.575 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:28:13.575 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:28:13.575 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:28:13.575 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:13.575 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@726 -- # 
xtrace_disable 00:28:13.575 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:13.575 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@509 -- # nvmfpid=1818429 00:28:13.575 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@510 -- # waitforlisten 1818429 00:28:13.575 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:13.575 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@835 -- # '[' -z 1818429 ']' 00:28:13.575 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:13.575 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:13.575 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:13.575 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:13.575 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:13.575 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:13.575 [2024-12-05 13:59:12.742600] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 00:28:13.575 [2024-12-05 13:59:12.742645] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:13.575 [2024-12-05 13:59:12.817153] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:13.575 [2024-12-05 13:59:12.838716] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:13.575 [2024-12-05 13:59:12.838753] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:13.575 [2024-12-05 13:59:12.838760] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:13.575 [2024-12-05 13:59:12.838765] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:13.575 [2024-12-05 13:59:12.838770] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
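[Annotation] nvmfappstart above launches the target and blocks in waitforlisten until the RPC socket answers. A minimal sketch of that start-and-wait pattern, assuming a stock SPDK checkout; the polling loop is an approximation of waitforlisten, and spdk_get_version is just a cheap RPC that succeeds once the app is listening on /var/tmp/spdk.sock:

./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
# Poll the default RPC socket until the target is ready to serve RPCs
until ./scripts/rpc.py spdk_get_version >/dev/null 2>&1; do
    sleep 0.5
done
echo "nvmf_tgt up with pid $nvmfpid"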
00:28:13.575 [2024-12-05 13:59:12.840149] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:13.575 [2024-12-05 13:59:12.840240] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:13.575 [2024-12-05 13:59:12.840323] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:13.575 [2024-12-05 13:59:12.840324] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:13.575 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:13.575 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@868 -- # return 0 00:28:13.575 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:13.575 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:13.575 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:13.575 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:13.575 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:28:13.575 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:13.575 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:13.575 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:13.575 Malloc0 00:28:13.575 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:13.575 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:28:13.575 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:13.575 13:59:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:13.575 Delay0 00:28:13.575 13:59:13 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:13.575 13:59:13 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:28:13.575 13:59:13 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:13.575 13:59:13 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:13.575 [2024-12-05 13:59:13.036648] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x24a79b0/0x2424a80) succeed. 00:28:13.575 [2024-12-05 13:59:13.045045] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x24a8030/0x2466120) succeed. 
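[Annotation] For reference, the rpc_cmd sequence this test drives, collected in one place (the bdev and transport calls appear above; the subsystem, namespace, and listener calls follow just below), as it would look with the standalone scripts/rpc.py client. All names and values are the ones visible in this log; the delay-bdev latencies (-r/-t/-w/-n) are in microseconds:

./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
./scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30
./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420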
00:28:13.575 13:59:13 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:13.575 13:59:13 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:28:13.575 13:59:13 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:13.575 13:59:13 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:13.575 13:59:13 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:13.575 13:59:13 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:13.575 13:59:13 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:13.575 13:59:13 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:13.575 13:59:13 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:13.575 13:59:13 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:28:13.575 13:59:13 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:13.575 13:59:13 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:13.575 [2024-12-05 13:59:13.177684] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:28:13.575 13:59:13 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:13.576 13:59:13 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:28:14.515 13:59:14 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:28:14.515 13:59:14 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1202 -- # local i=0 00:28:14.516 13:59:14 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:28:14.516 13:59:14 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:28:14.516 13:59:14 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1209 -- # sleep 2 00:28:16.532 13:59:16 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:28:16.532 13:59:16 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:28:16.532 13:59:16 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:28:16.532 13:59:16 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:28:16.532 13:59:16 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- 
common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:28:16.532 13:59:16 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1212 -- # return 0 00:28:16.532 13:59:16 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=1819129 00:28:16.532 13:59:16 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:28:16.532 13:59:16 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:28:16.532 [global] 00:28:16.532 thread=1 00:28:16.532 invalidate=1 00:28:16.532 rw=write 00:28:16.532 time_based=1 00:28:16.532 runtime=60 00:28:16.532 ioengine=libaio 00:28:16.532 direct=1 00:28:16.532 bs=4096 00:28:16.532 iodepth=1 00:28:16.532 norandommap=0 00:28:16.532 numjobs=1 00:28:16.532 00:28:16.532 verify_dump=1 00:28:16.532 verify_backlog=512 00:28:16.532 verify_state_save=0 00:28:16.532 do_verify=1 00:28:16.532 verify=crc32c-intel 00:28:16.532 [job0] 00:28:16.532 filename=/dev/nvme0n1 00:28:16.532 Could not set queue depth (nvme0n1) 00:28:16.812 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:28:16.812 fio-3.35 00:28:16.812 Starting 1 thread 00:28:19.340 13:59:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:28:19.340 13:59:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.340 13:59:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:19.598 true 00:28:19.598 13:59:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.598 13:59:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:28:19.598 13:59:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.598 13:59:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:19.598 true 00:28:19.598 13:59:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.598 13:59:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:28:19.598 13:59:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.598 13:59:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:19.598 true 00:28:19.598 13:59:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.598 13:59:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:28:19.598 13:59:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.598 13:59:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:19.598 true 00:28:19.598 13:59:19 
nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.598 13:59:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:28:22.886 13:59:22 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:28:22.886 13:59:22 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.886 13:59:22 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:22.886 true 00:28:22.886 13:59:22 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.886 13:59:22 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:28:22.886 13:59:22 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.886 13:59:22 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:22.886 true 00:28:22.886 13:59:22 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.886 13:59:22 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:28:22.886 13:59:22 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.886 13:59:22 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:22.886 true 00:28:22.886 13:59:22 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.886 13:59:22 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:28:22.886 13:59:22 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.886 13:59:22 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:22.886 true 00:28:22.886 13:59:22 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.886 13:59:22 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:28:22.886 13:59:22 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 1819129 00:29:19.141 00:29:19.141 job0: (groupid=0, jobs=1): err= 0: pid=1819381: Thu Dec 5 14:00:16 2024 00:29:19.141 read: IOPS=1384, BW=5537KiB/s (5670kB/s)(324MiB/60000msec) 00:29:19.141 slat (usec): min=3, max=6975, avg= 7.01, stdev=24.21 00:29:19.141 clat (usec): min=76, max=42508k, avg=608.25, stdev=147492.59 00:29:19.141 lat (usec): min=84, max=42508k, avg=615.26, stdev=147492.59 00:29:19.141 clat percentiles (usec): 00:29:19.141 | 1.00th=[ 87], 5.00th=[ 89], 10.00th=[ 90], 20.00th=[ 92], 00:29:19.141 | 30.00th=[ 94], 40.00th=[ 95], 50.00th=[ 96], 60.00th=[ 98], 00:29:19.141 | 70.00th=[ 99], 80.00th=[ 101], 90.00th=[ 103], 95.00th=[ 105], 00:29:19.141 | 99.00th=[ 110], 99.50th=[ 111], 99.90th=[ 116], 99.95th=[ 120], 00:29:19.141 | 99.99th=[ 188] 00:29:19.141 write: IOPS=1390, BW=5564KiB/s (5697kB/s)(326MiB/60000msec); 0 zone resets 00:29:19.141 slat (usec): min=3, 
max=882, avg= 9.19, stdev= 3.79 00:29:19.141 clat (usec): min=24, max=348, avg=93.60, stdev= 5.74 00:29:19.141 lat (usec): min=81, max=907, avg=102.79, stdev= 6.93 00:29:19.141 clat percentiles (usec): 00:29:19.141 | 1.00th=[ 84], 5.00th=[ 86], 10.00th=[ 87], 20.00th=[ 89], 00:29:19.141 | 30.00th=[ 91], 40.00th=[ 92], 50.00th=[ 94], 60.00th=[ 95], 00:29:19.141 | 70.00th=[ 96], 80.00th=[ 98], 90.00th=[ 101], 95.00th=[ 103], 00:29:19.141 | 99.00th=[ 108], 99.50th=[ 110], 99.90th=[ 119], 99.95th=[ 126], 00:29:19.141 | 99.99th=[ 206] 00:29:19.141 bw ( KiB/s): min= 4096, max=20480, per=100.00%, avg=18598.17, stdev=2789.41, samples=35 00:29:19.141 iops : min= 1024, max= 5120, avg=4649.54, stdev=697.35, samples=35 00:29:19.141 lat (usec) : 50=0.01%, 100=82.02%, 250=17.98%, 500=0.01% 00:29:19.141 lat (msec) : >=2000=0.01% 00:29:19.141 cpu : usr=1.26%, sys=2.26%, ctx=166524, majf=0, minf=107 00:29:19.141 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:19.141 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:19.141 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:19.141 issued rwts: total=83060,83456,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:19.141 latency : target=0, window=0, percentile=100.00%, depth=1 00:29:19.141 00:29:19.141 Run status group 0 (all jobs): 00:29:19.141 READ: bw=5537KiB/s (5670kB/s), 5537KiB/s-5537KiB/s (5670kB/s-5670kB/s), io=324MiB (340MB), run=60000-60000msec 00:29:19.141 WRITE: bw=5564KiB/s (5697kB/s), 5564KiB/s-5564KiB/s (5697kB/s-5697kB/s), io=326MiB (342MB), run=60000-60000msec 00:29:19.141 00:29:19.141 Disk stats (read/write): 00:29:19.141 nvme0n1: ios=83024/82944, merge=0/0, ticks=7629/7375, in_queue=15004, util=99.59% 00:29:19.141 14:00:16 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:29:19.141 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:29:19.141 14:00:17 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:29:19.141 14:00:17 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1223 -- # local i=0 00:29:19.141 14:00:17 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:29:19.141 14:00:17 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:29:19.141 14:00:17 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:29:19.141 14:00:17 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:29:19.141 14:00:17 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1235 -- # return 0 00:29:19.141 14:00:17 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:29:19.141 14:00:17 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:29:19.141 nvmf hotplug test: fio successful as expected 00:29:19.141 14:00:17 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:19.141 14:00:17 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:29:19.141 14:00:17 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:29:19.141 14:00:17 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:19.141 14:00:17 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:29:19.141 14:00:17 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:29:19.141 14:00:17 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:29:19.141 14:00:17 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:19.141 14:00:17 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # sync 00:29:19.141 14:00:17 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:29:19.141 14:00:17 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:29:19.141 14:00:17 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set +e 00:29:19.141 14:00:17 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:19.141 14:00:17 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:29:19.141 rmmod nvme_rdma 00:29:19.141 rmmod nvme_fabrics 00:29:19.141 14:00:17 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:19.141 14:00:17 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@128 -- # set -e 00:29:19.141 14:00:17 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@129 -- # return 0 00:29:19.141 14:00:17 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@517 -- # '[' -n 1818429 ']' 00:29:19.141 14:00:17 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@518 -- # killprocess 1818429 00:29:19.141 14:00:17 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # '[' -z 1818429 ']' 00:29:19.141 14:00:17 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@958 -- # kill -0 1818429 00:29:19.141 14:00:17 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@959 -- # uname 00:29:19.141 14:00:17 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:19.141 14:00:17 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1818429 00:29:19.141 14:00:17 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:19.141 14:00:17 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:19.141 14:00:17 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1818429' 00:29:19.141 killing process with pid 1818429 00:29:19.141 14:00:17 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@973 -- # kill 1818429 00:29:19.141 14:00:17 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@978 -- # wait 1818429 00:29:19.141 14:00:18 
nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:19.141 14:00:18 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:29:19.141 00:29:19.141 real 1m11.574s 00:29:19.141 user 4m29.223s 00:29:19.141 sys 0m7.498s 00:29:19.141 14:00:18 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:19.141 14:00:18 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:29:19.141 ************************************ 00:29:19.141 END TEST nvmf_initiator_timeout 00:29:19.141 ************************************ 00:29:19.141 14:00:18 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:29:19.141 14:00:18 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' rdma = tcp ']' 00:29:19.141 14:00:18 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@60 -- # [[ rdma == \r\d\m\a ]] 00:29:19.141 14:00:18 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@63 -- # run_test nvmf_srq_overwhelm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/srq_overwhelm.sh --transport=rdma 00:29:19.141 14:00:18 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:19.141 14:00:18 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:19.141 14:00:18 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:29:19.141 ************************************ 00:29:19.141 START TEST nvmf_srq_overwhelm 00:29:19.141 ************************************ 00:29:19.141 14:00:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/srq_overwhelm.sh --transport=rdma 00:29:19.141 * Looking for test storage... 
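[Annotation] Before the next test begins: the teardown that just completed above (nvme disconnect, subsystem delete, target shutdown, module unload) amounts to roughly the following, sketched with standalone tools. kill/wait stands in for the killprocess helper, and the real nvmftestfini retries the unload up to 20 times because the modules can stay busy briefly:

nvme disconnect -n nqn.2016-06.io.spdk:cnode1
./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
kill "$nvmfpid" && wait "$nvmfpid"
# Unload the initiator-side modules, verbosely, as the log shows
modprobe -v -r nvme-rdma
modprobe -v -r nvme-fabrics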
00:29:19.142 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:29:19.142 14:00:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:19.142 14:00:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1711 -- # lcov --version 00:29:19.142 14:00:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:19.142 14:00:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:19.142 14:00:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:19.142 14:00:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:19.142 14:00:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:19.142 14:00:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@336 -- # IFS=.-: 00:29:19.142 14:00:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@336 -- # read -ra ver1 00:29:19.142 14:00:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@337 -- # IFS=.-: 00:29:19.142 14:00:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@337 -- # read -ra ver2 00:29:19.142 14:00:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@338 -- # local 'op=<' 00:29:19.142 14:00:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@340 -- # ver1_l=2 00:29:19.142 14:00:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@341 -- # ver2_l=1 00:29:19.142 14:00:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:19.142 14:00:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@344 -- # case "$op" in 00:29:19.142 14:00:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@345 -- # : 1 00:29:19.142 14:00:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:19.142 14:00:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:19.142 14:00:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@365 -- # decimal 1 00:29:19.142 14:00:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@353 -- # local d=1 00:29:19.142 14:00:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:19.142 14:00:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@355 -- # echo 1 00:29:19.142 14:00:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@365 -- # ver1[v]=1 00:29:19.142 14:00:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@366 -- # decimal 2 00:29:19.142 14:00:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@353 -- # local d=2 00:29:19.142 14:00:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:19.142 14:00:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@355 -- # echo 2 00:29:19.142 14:00:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@366 -- # ver2[v]=2 00:29:19.142 14:00:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:19.142 14:00:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:19.142 14:00:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@368 -- # return 0 00:29:19.142 14:00:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:19.142 14:00:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:19.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:19.142 --rc genhtml_branch_coverage=1 00:29:19.142 --rc genhtml_function_coverage=1 00:29:19.142 --rc genhtml_legend=1 00:29:19.142 --rc geninfo_all_blocks=1 00:29:19.142 --rc geninfo_unexecuted_blocks=1 00:29:19.142 00:29:19.142 ' 00:29:19.142 14:00:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:19.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:19.142 --rc genhtml_branch_coverage=1 00:29:19.142 --rc genhtml_function_coverage=1 00:29:19.142 --rc genhtml_legend=1 00:29:19.142 --rc geninfo_all_blocks=1 00:29:19.142 --rc geninfo_unexecuted_blocks=1 00:29:19.142 00:29:19.142 ' 00:29:19.142 14:00:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:19.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:19.142 --rc genhtml_branch_coverage=1 00:29:19.142 --rc genhtml_function_coverage=1 00:29:19.142 --rc genhtml_legend=1 00:29:19.142 --rc geninfo_all_blocks=1 00:29:19.142 --rc geninfo_unexecuted_blocks=1 00:29:19.142 00:29:19.142 ' 00:29:19.142 14:00:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:19.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:19.142 --rc genhtml_branch_coverage=1 00:29:19.142 --rc genhtml_function_coverage=1 00:29:19.142 --rc genhtml_legend=1 00:29:19.142 --rc geninfo_all_blocks=1 00:29:19.142 --rc geninfo_unexecuted_blocks=1 00:29:19.142 00:29:19.142 ' 00:29:19.142 14:00:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@9 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:29:19.142 14:00:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@7 -- # uname -s 00:29:19.142 14:00:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:19.142 14:00:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:19.142 14:00:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:19.142 14:00:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:19.142 14:00:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:19.142 14:00:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:19.142 14:00:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:19.142 14:00:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:19.142 14:00:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:19.142 14:00:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:19.142 14:00:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:29:19.142 14:00:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:29:19.142 14:00:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:19.142 14:00:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:19.142 14:00:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:19.142 14:00:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:19.142 14:00:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:29:19.142 14:00:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@15 -- # shopt -s extglob 00:29:19.142 14:00:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:19.142 14:00:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:19.142 14:00:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:19.142 14:00:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
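[Annotation] The lcov gate traced above (lt 1.15 2 via cmp_versions) compares versions component-wise after splitting them apart. A simplified sketch of that idiom, splitting on dots only — the real cmp_versions in scripts/common.sh also splits on '-' and ':':

lt() {
    local IFS=.
    local -a v1=($1) v2=($2)   # word-split each version on dots
    local i
    for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # strictly older
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1   # strictly newer
    done
    return 1   # equal versions are not "less than"
}
lt 1.15 2 && echo "lcov predates 2.x"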
00:29:19.142 14:00:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:19.142 14:00:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:19.142 14:00:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@5 -- # export PATH 00:29:19.142 14:00:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:19.142 14:00:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@51 -- # : 0 00:29:19.142 14:00:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:19.142 14:00:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:19.142 14:00:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:19.142 14:00:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:19.142 14:00:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:19.142 14:00:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:19.142 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:19.142 14:00:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:19.142 14:00:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:19.143 14:00:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:19.143 14:00:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- 
target/srq_overwhelm.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:19.143 14:00:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:19.143 14:00:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@13 -- # NVME_CONNECT='nvme connect -i 16' 00:29:19.143 14:00:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@15 -- # nvmftestinit 00:29:19.143 14:00:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:29:19.143 14:00:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:19.143 14:00:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:19.143 14:00:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:19.143 14:00:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:19.143 14:00:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:19.143 14:00:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:19.143 14:00:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:19.143 14:00:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:19.143 14:00:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:19.143 14:00:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@309 -- # xtrace_disable 00:29:19.143 14:00:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:29:24.423 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:24.423 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@315 -- # pci_devs=() 00:29:24.423 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:24.423 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:24.423 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:24.423 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:24.423 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:24.423 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@319 -- # net_devs=() 00:29:24.423 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:24.423 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@320 -- # e810=() 00:29:24.423 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@320 -- # local -ga e810 00:29:24.423 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@321 -- # x722=() 00:29:24.423 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@321 -- # local -ga x722 00:29:24.423 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@322 -- # mlx=() 00:29:24.423 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@322 -- 
# local -ga mlx 00:29:24.423 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:24.423 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:24.423 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:24.423 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:24.423 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:24.423 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:24.423 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:24.423 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:24.423 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:24.423 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:24.423 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:24.423 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:24.423 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:24.423 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:29:24.423 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:29:24.423 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:29:24.423 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:29:24.423 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:29:24.423 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:24.423 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:24.423 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:29:24.423 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:29:24.423 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:29:24.423 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:29:24.423 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:29:24.423 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:29:24.424 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:29:24.424 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme 
connect -i 15' 00:29:24.424 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:24.424 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:29:24.424 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:29:24.424 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:29:24.424 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:29:24.424 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:29:24.424 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:29:24.424 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:29:24.424 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:29:24.424 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:24.424 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:29:24.424 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:24.424 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:24.424 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:29:24.424 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:24.424 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:24.424 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:29:24.424 Found net devices under 0000:18:00.0: mlx_0_0 00:29:24.424 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:24.424 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:24.424 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:24.424 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:29:24.424 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:24.424 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:24.424 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:29:24.424 Found net devices under 0000:18:00.1: mlx_0_1 00:29:24.424 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:24.424 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:24.424 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@442 -- # is_hw=yes 00:29:24.424 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@444 -- # [[ yes == yes ]] 
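The discovery pass traced above walks the PCI bus cache for the vendor/device IDs the harness supports (Intel E810/x722, Mellanox ConnectX), keeps only the Mellanox list once the mlx5 driver is detected, and then resolves each PCI function to its net interface through sysfs ("Found net devices under 0000:18:00.0: mlx_0_0"). A minimal stand-alone sketch of that sysfs resolution, assuming the standard /sys/bus/pci layout — this is not the SPDK helper itself, just the same lookup restated:

  mellanox=0x15b3
  for pci in /sys/bus/pci/devices/*; do
      # keep only Mellanox functions, mirroring the 0x15b3 matches in the trace
      [[ $(cat "$pci/vendor" 2>/dev/null) == "$mellanox" ]] || continue
      for net in "$pci"/net/*; do
          # prints e.g. "0000:18:00.0: mlx_0_0", as echoed in the trace above
          [[ -e $net ]] && echo "${pci##*/}: ${net##*/}"
      done
  done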
00:29:24.424 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:29:24.424 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:29:24.424 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@448 -- # rdma_device_init 00:29:24.424 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:29:24.424 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@62 -- # uname 00:29:24.424 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:29:24.424 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@66 -- # modprobe ib_cm 00:29:24.424 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@67 -- # modprobe ib_core 00:29:24.424 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@68 -- # modprobe ib_umad 00:29:24.424 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:29:24.424 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@70 -- # modprobe iw_cm 00:29:24.424 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:29:24.424 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:29:24.424 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@530 -- # allocate_nic_ips 00:29:24.424 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:29:24.424 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@77 -- # get_rdma_if_list 00:29:24.424 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:29:24.424 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:29:24.424 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:29:24.424 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:29:24.424 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:29:24.424 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:29:24.424 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:24.424 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:29:24.424 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@108 -- # echo mlx_0_0 00:29:24.424 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@109 -- # continue 2 00:29:24.424 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:29:24.424 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:24.424 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:29:24.424 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 
-- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:24.424 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:29:24.424 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@108 -- # echo mlx_0_1 00:29:24.424 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@109 -- # continue 2 00:29:24.424 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:29:24.424 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:29:24.424 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:29:24.424 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:29:24.424 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # awk '{print $4}' 00:29:24.424 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # cut -d/ -f1 00:29:24.424 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:29:24.424 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:29:24.424 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:29:24.424 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:29:24.424 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:29:24.424 altname enp24s0f0np0 00:29:24.424 altname ens785f0np0 00:29:24.424 inet 192.168.100.8/24 scope global mlx_0_0 00:29:24.424 valid_lft forever preferred_lft forever 00:29:24.424 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:29:24.424 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:29:24.424 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:29:24.424 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:29:24.424 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # awk '{print $4}' 00:29:24.424 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # cut -d/ -f1 00:29:24.424 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:29:24.424 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:29:24.424 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:29:24.424 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:29:24.424 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:29:24.424 altname enp24s0f1np1 00:29:24.424 altname ens785f1np1 00:29:24.424 inet 192.168.100.9/24 scope global mlx_0_1 00:29:24.424 valid_lft forever preferred_lft forever 00:29:24.424 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@450 -- # return 0 00:29:24.424 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:24.424 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:29:24.424 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- 
nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:29:24.424 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:29:24.424 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@90 -- # get_rdma_if_list 00:29:24.424 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:29:24.425 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:29:24.425 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:29:24.425 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:29:24.425 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:29:24.425 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:29:24.425 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:24.425 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:29:24.425 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@108 -- # echo mlx_0_0 00:29:24.425 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@109 -- # continue 2 00:29:24.425 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:29:24.425 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:24.425 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:29:24.425 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:24.425 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:29:24.425 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@108 -- # echo mlx_0_1 00:29:24.425 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@109 -- # continue 2 00:29:24.425 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:29:24.425 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:29:24.425 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:29:24.425 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:29:24.425 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # awk '{print $4}' 00:29:24.425 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # cut -d/ -f1 00:29:24.425 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:29:24.425 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:29:24.425 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:29:24.425 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- 
nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:29:24.425 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # awk '{print $4}' 00:29:24.425 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # cut -d/ -f1 00:29:24.425 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:29:24.425 192.168.100.9' 00:29:24.425 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:29:24.425 192.168.100.9' 00:29:24.425 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@485 -- # head -n 1 00:29:24.425 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:29:24.425 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:29:24.425 192.168.100.9' 00:29:24.425 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@486 -- # tail -n +2 00:29:24.425 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@486 -- # head -n 1 00:29:24.685 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:29:24.685 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:29:24.685 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:29:24.685 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:29:24.685 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:29:24.685 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:29:24.685 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@17 -- # nvmfappstart -m 0xF 00:29:24.685 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:24.685 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:24.685 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:29:24.685 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@509 -- # nvmfpid=1833982 00:29:24.685 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@510 -- # waitforlisten 1833982 00:29:24.685 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:24.685 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@835 -- # '[' -z 1833982 ']' 00:29:24.685 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:24.685 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:24.685 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:24.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
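Each interface's address is read with `ip -o -4 addr show <if> | awk '{print $4}' | cut -d/ -f1`, and the resulting newline-separated list is split into a first and second target IP with head/tail, exactly as the @485/@486 lines show. A compact restatement of that split, using the addresses from this run:

  # addresses taken from this run's trace
  RDMA_IP_LIST=$(printf '%s\n' 192.168.100.8 192.168.100.9)
  NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
  # a second address, when present, becomes the secondary target
  NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)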
00:29:24.685 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:24.685 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:29:24.685 [2024-12-05 14:00:24.357168] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 00:29:24.685 [2024-12-05 14:00:24.357217] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:24.685 [2024-12-05 14:00:24.431134] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:24.685 [2024-12-05 14:00:24.453921] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:24.685 [2024-12-05 14:00:24.453962] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:24.685 [2024-12-05 14:00:24.453968] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:24.685 [2024-12-05 14:00:24.453974] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:24.685 [2024-12-05 14:00:24.453979] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:24.685 [2024-12-05 14:00:24.455409] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:24.685 [2024-12-05 14:00:24.455467] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:24.685 [2024-12-05 14:00:24.455577] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:24.685 [2024-12-05 14:00:24.455578] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:24.945 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:24.945 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@868 -- # return 0 00:29:24.945 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:24.945 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:24.945 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:29:24.945 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:24.945 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -s 1024 00:29:24.945 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:24.945 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:29:24.945 [2024-12-05 14:00:24.615945] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1a83f30/0x1a88420) succeed. 00:29:24.945 [2024-12-05 14:00:24.624205] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1a855c0/0x1ac9ac0) succeed. 
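With the target up (nvmf_tgt -m 0xF, one reactor per core as the NOTICE lines report) and both mlx5 IB devices created, rpc_cmd issues nvmf_create_transport over the UNIX socket. rpc_cmd is the harness wrapper around SPDK's scripts/rpc.py; an equivalent direct invocation — same RPC and arguments as traced, assuming the default /var/tmp/spdk.sock socket — would be:

  # stock rpc.py client, arguments copied from the trace above
  ./scripts/rpc.py -s /var/tmp/spdk.sock \
      nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -s 1024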
00:29:24.945 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:24.945 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # seq 0 5 00:29:24.945 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:29:24.945 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000000 00:29:24.945 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:24.945 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:29:24.945 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:24.945 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:24.945 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:24.945 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:29:24.945 Malloc0 00:29:24.945 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:24.945 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0 00:29:24.945 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:24.945 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:29:24.945 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:24.945 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:29:24.945 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:24.945 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:29:24.945 [2024-12-05 14:00:24.721416] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:29:24.945 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:24.945 14:00:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode0 -a 192.168.100.8 -s 4420 00:29:25.881 14:00:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme0n1 00:29:25.881 14:00:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1239 -- # local i=0 00:29:25.881 14:00:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:29:25.881 14:00:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:29:25.881 14:00:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- 
# lsblk -l -o NAME 00:29:25.881 14:00:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:29:25.881 14:00:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1250 -- # return 0 00:29:25.881 14:00:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:29:25.881 14:00:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:25.881 14:00:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:25.881 14:00:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:29:25.881 14:00:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:25.881 14:00:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:29:25.881 14:00:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:25.881 14:00:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:29:26.140 Malloc1 00:29:26.140 14:00:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.140 14:00:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:29:26.140 14:00:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.140 14:00:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:29:26.140 14:00:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.140 14:00:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:29:26.140 14:00:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.140 14:00:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:29:26.140 14:00:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.140 14:00:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:29:27.079 14:00:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme1n1 00:29:27.079 14:00:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1239 -- # local i=0 00:29:27.079 14:00:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:29:27.079 14:00:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # grep -q -w nvme1n1 00:29:27.079 14:00:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:29:27.079 14:00:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- 
common/autotest_common.sh@1246 -- # grep -q -w nvme1n1 00:29:27.079 14:00:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1250 -- # return 0 00:29:27.079 14:00:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:29:27.079 14:00:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:29:27.080 14:00:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:27.080 14:00:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:29:27.080 14:00:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:27.080 14:00:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:29:27.080 14:00:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:27.080 14:00:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:29:27.080 Malloc2 00:29:27.080 14:00:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:27.080 14:00:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:29:27.080 14:00:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:27.080 14:00:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:29:27.080 14:00:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:27.080 14:00:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:29:27.080 14:00:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:27.080 14:00:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:29:27.080 14:00:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:27.080 14:00:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode2 -a 192.168.100.8 -s 4420 00:29:28.018 14:00:27 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme2n1 00:29:28.018 14:00:27 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1239 -- # local i=0 00:29:28.018 14:00:27 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # grep -q -w nvme2n1 00:29:28.018 14:00:27 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:29:28.018 14:00:27 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # grep -q -w nvme2n1 00:29:28.018 14:00:27 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:29:28.018 14:00:27 
nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1250 -- # return 0 00:29:28.018 14:00:27 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:29:28.018 14:00:27 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:29:28.018 14:00:27 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:28.018 14:00:27 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:29:28.018 14:00:27 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:28.018 14:00:27 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:29:28.018 14:00:27 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:28.018 14:00:27 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:29:28.018 Malloc3 00:29:28.018 14:00:27 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:28.018 14:00:27 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:29:28.018 14:00:27 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:28.018 14:00:27 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:29:28.019 14:00:27 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:28.019 14:00:27 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:29:28.019 14:00:27 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:28.019 14:00:27 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:29:28.019 14:00:27 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:28.019 14:00:27 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode3 -a 192.168.100.8 -s 4420 00:29:28.955 14:00:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme3n1 00:29:28.955 14:00:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1239 -- # local i=0 00:29:28.955 14:00:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:29:28.955 14:00:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # grep -q -w nvme3n1 00:29:29.215 14:00:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:29:29.215 14:00:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # grep -q -w nvme3n1 00:29:29.215 14:00:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1250 -- # return 0 00:29:29.215 
14:00:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:29:29.215 14:00:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:29:29.215 14:00:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:29.215 14:00:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:29:29.215 14:00:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:29.215 14:00:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:29:29.215 14:00:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:29.215 14:00:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:29:29.215 Malloc4 00:29:29.215 14:00:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:29.215 14:00:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:29:29.215 14:00:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:29.215 14:00:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:29:29.215 14:00:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:29.215 14:00:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 192.168.100.8 -s 4420 00:29:29.215 14:00:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:29.215 14:00:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:29:29.215 14:00:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:29.215 14:00:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode4 -a 192.168.100.8 -s 4420 00:29:30.153 14:00:29 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme4n1 00:29:30.153 14:00:29 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1239 -- # local i=0 00:29:30.153 14:00:29 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:29:30.153 14:00:29 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # grep -q -w nvme4n1 00:29:30.153 14:00:29 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # grep -q -w nvme4n1 00:29:30.153 14:00:29 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:29:30.153 14:00:29 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1250 -- # return 0 00:29:30.153 14:00:29 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 
00:29:30.153 14:00:29 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK00000000000005 00:29:30.153 14:00:29 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:30.153 14:00:29 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:29:30.153 14:00:29 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:30.153 14:00:29 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:29:30.153 14:00:29 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:30.153 14:00:29 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:29:30.153 Malloc5 00:29:30.153 14:00:29 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:30.153 14:00:29 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:29:30.153 14:00:29 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:30.153 14:00:29 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:29:30.153 14:00:29 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:30.153 14:00:29 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t rdma -a 192.168.100.8 -s 4420 00:29:30.153 14:00:29 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:30.153 14:00:29 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:29:30.153 14:00:29 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:30.153 14:00:29 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode5 -a 192.168.100.8 -s 4420 00:29:31.091 14:00:30 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme5n1 00:29:31.091 14:00:30 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1239 -- # local i=0 00:29:31.091 14:00:30 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:29:31.091 14:00:30 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # grep -q -w nvme5n1 00:29:31.091 14:00:30 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:29:31.091 14:00:30 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # grep -q -w nvme5n1 00:29:31.091 14:00:30 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1250 -- # return 0 00:29:31.091 14:00:30 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 1048576 -d 128 -t read -r 10 -n 13 00:29:31.091 
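The six iterations above (cnode0 through cnode5) all follow the same shape; condensed below with every RPC name and argument taken verbatim from the trace — only the loop framing and the polling form of waitforblk are restated here:

  for i in $(seq 0 5); do
      rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode${i}" -a \
          -s "SPDK0000000000000${i}"
      rpc_cmd bdev_malloc_create 64 512 -b "Malloc${i}"
      rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode${i}" "Malloc${i}"
      rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode${i}" \
          -t rdma -a 192.168.100.8 -s 4420
      nvme connect -i 15 \
          --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 \
          --hostid=00bafac1-9c9c-e711-906e-0017a4403562 \
          -t rdma -n "nqn.2016-06.io.spdk:cnode${i}" -a 192.168.100.8 -s 4420
      # waitforblk: poll until the new namespace shows up as nvme${i}n1
      while ! lsblk -l -o NAME | grep -q -w "nvme${i}n1"; do sleep 0.1; done
  done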
[global] 00:29:31.091 thread=1 00:29:31.091 invalidate=1 00:29:31.091 rw=read 00:29:31.091 time_based=1 00:29:31.091 runtime=10 00:29:31.091 ioengine=libaio 00:29:31.091 direct=1 00:29:31.091 bs=1048576 00:29:31.091 iodepth=128 00:29:31.091 norandommap=1 00:29:31.091 numjobs=13 00:29:31.091 00:29:31.091 [job0] 00:29:31.091 filename=/dev/nvme0n1 00:29:31.091 [job1] 00:29:31.091 filename=/dev/nvme1n1 00:29:31.091 [job2] 00:29:31.091 filename=/dev/nvme2n1 00:29:31.091 [job3] 00:29:31.091 filename=/dev/nvme3n1 00:29:31.091 [job4] 00:29:31.091 filename=/dev/nvme4n1 00:29:31.091 [job5] 00:29:31.091 filename=/dev/nvme5n1 00:29:31.352 Could not set queue depth (nvme0n1) 00:29:31.352 Could not set queue depth (nvme1n1) 00:29:31.352 Could not set queue depth (nvme2n1) 00:29:31.352 Could not set queue depth (nvme3n1) 00:29:31.352 Could not set queue depth (nvme4n1) 00:29:31.352 Could not set queue depth (nvme5n1) 00:29:31.610 job0: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:29:31.610 ... 00:29:31.610 job1: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:29:31.610 ... 00:29:31.610 job2: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:29:31.610 ... 00:29:31.610 job3: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:29:31.610 ... 00:29:31.610 job4: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:29:31.610 ... 00:29:31.610 job5: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:29:31.610 ... 
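The job file printed above pairs six [jobN] sections (one per connected namespace) with numjobs=13, so fio forks 6 x 13 = 78 reader threads — matching the "Starting 78 threads" line that follows. A sketch of how an equivalent job file could be generated, with the output path being an assumption for illustration:

  # emit a job file equivalent to the one printed above (path is illustrative)
  {
      printf '[global]\nthread=1\ninvalidate=1\nrw=read\ntime_based=1\n'
      printf 'runtime=10\nioengine=libaio\ndirect=1\nbs=1048576\n'
      printf 'iodepth=128\nnorandommap=1\nnumjobs=13\n'
      for i in $(seq 0 5); do
          printf '\n[job%d]\nfilename=/dev/nvme%dn1\n' "$i" "$i"
      done
  } > /tmp/srq_overwhelm.fio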
00:29:31.610 fio-3.35 00:29:31.610 Starting 78 threads 00:29:46.499 00:29:46.499 job0: (groupid=0, jobs=1): err= 0: pid=1835415: Thu Dec 5 14:00:45 2024 00:29:46.499 read: IOPS=26, BW=26.3MiB/s (27.6MB/s)(285MiB/10826msec) 00:29:46.499 slat (usec): min=35, max=2108.5k, avg=37821.39, stdev=233315.46 00:29:46.499 clat (msec): min=45, max=8918, avg=4529.73, stdev=3087.96 00:29:46.499 lat (msec): min=1274, max=8921, avg=4567.55, stdev=3082.60 00:29:46.499 clat percentiles (msec): 00:29:46.499 | 1.00th=[ 1267], 5.00th=[ 1351], 10.00th=[ 1485], 20.00th=[ 1569], 00:29:46.499 | 30.00th=[ 1703], 40.00th=[ 1787], 50.00th=[ 2869], 60.00th=[ 6409], 00:29:46.499 | 70.00th=[ 8087], 80.00th=[ 8288], 90.00th=[ 8490], 95.00th=[ 8792], 00:29:46.499 | 99.00th=[ 8926], 99.50th=[ 8926], 99.90th=[ 8926], 99.95th=[ 8926], 00:29:46.499 | 99.99th=[ 8926] 00:29:46.499 bw ( KiB/s): min=20480, max=94208, per=1.79%, avg=53589.33, stdev=34450.66, samples=6 00:29:46.499 iops : min= 20, max= 92, avg=52.33, stdev=33.64, samples=6 00:29:46.499 lat (msec) : 50=0.35%, 2000=43.51%, >=2000=56.14% 00:29:46.499 cpu : usr=0.01%, sys=0.68%, ctx=482, majf=0, minf=32769 00:29:46.499 IO depths : 1=0.4%, 2=0.7%, 4=1.4%, 8=2.8%, 16=5.6%, 32=11.2%, >=64=77.9% 00:29:46.499 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:46.499 complete : 0=0.0%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.6% 00:29:46.499 issued rwts: total=285,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:46.499 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:46.499 job0: (groupid=0, jobs=1): err= 0: pid=1835416: Thu Dec 5 14:00:45 2024 00:29:46.499 read: IOPS=28, BW=28.1MiB/s (29.5MB/s)(396MiB/14073msec) 00:29:46.499 slat (usec): min=73, max=2165.1k, avg=30132.09, stdev=211499.89 00:29:46.499 clat (msec): min=885, max=11706, avg=4303.57, stdev=4664.85 00:29:46.499 lat (msec): min=887, max=11709, avg=4333.70, stdev=4674.24 00:29:46.499 clat percentiles (msec): 00:29:46.499 | 1.00th=[ 902], 5.00th=[ 927], 10.00th=[ 944], 20.00th=[ 978], 00:29:46.499 | 30.00th=[ 1011], 40.00th=[ 1045], 50.00th=[ 1083], 60.00th=[ 1284], 00:29:46.499 | 70.00th=[ 9597], 80.00th=[11073], 90.00th=[11476], 95.00th=[11476], 00:29:46.499 | 99.00th=[11610], 99.50th=[11745], 99.90th=[11745], 99.95th=[11745], 00:29:46.499 | 99.99th=[11745] 00:29:46.499 bw ( KiB/s): min= 2052, max=157696, per=2.05%, avg=61205.56, stdev=62934.67, samples=9 00:29:46.499 iops : min= 2, max= 154, avg=59.56, stdev=61.68, samples=9 00:29:46.499 lat (msec) : 1000=26.77%, 2000=38.89%, >=2000=34.34% 00:29:46.499 cpu : usr=0.01%, sys=0.70%, ctx=667, majf=0, minf=32769 00:29:46.499 IO depths : 1=0.3%, 2=0.5%, 4=1.0%, 8=2.0%, 16=4.0%, 32=8.1%, >=64=84.1% 00:29:46.499 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:46.499 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:29:46.499 issued rwts: total=396,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:46.499 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:46.499 job0: (groupid=0, jobs=1): err= 0: pid=1835417: Thu Dec 5 14:00:45 2024 00:29:46.499 read: IOPS=33, BW=33.0MiB/s (34.6MB/s)(393MiB/11905msec) 00:29:46.499 slat (usec): min=30, max=2117.9k, avg=25476.89, stdev=115160.51 00:29:46.499 clat (msec): min=1737, max=5248, avg=3252.65, stdev=1115.14 00:29:46.499 lat (msec): min=1813, max=5289, avg=3278.13, stdev=1118.07 00:29:46.499 clat percentiles (msec): 00:29:46.499 | 1.00th=[ 1871], 5.00th=[ 1989], 10.00th=[ 2039], 20.00th=[ 2072], 00:29:46.499 | 30.00th=[ 
2165], 40.00th=[ 2400], 50.00th=[ 3171], 60.00th=[ 3742], 00:29:46.499 | 70.00th=[ 4279], 80.00th=[ 4396], 90.00th=[ 4732], 95.00th=[ 5067], 00:29:46.499 | 99.00th=[ 5269], 99.50th=[ 5269], 99.90th=[ 5269], 99.95th=[ 5269], 00:29:46.499 | 99.99th=[ 5269] 00:29:46.499 bw ( KiB/s): min= 4096, max=86016, per=1.74%, avg=51984.20, stdev=26884.02, samples=10 00:29:46.499 iops : min= 4, max= 84, avg=50.70, stdev=26.23, samples=10 00:29:46.499 lat (msec) : 2000=5.60%, >=2000=94.40% 00:29:46.499 cpu : usr=0.03%, sys=0.90%, ctx=1081, majf=0, minf=32769 00:29:46.499 IO depths : 1=0.3%, 2=0.5%, 4=1.0%, 8=2.0%, 16=4.1%, 32=8.1%, >=64=84.0% 00:29:46.499 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:46.499 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:29:46.499 issued rwts: total=393,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:46.499 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:46.499 job0: (groupid=0, jobs=1): err= 0: pid=1835418: Thu Dec 5 14:00:45 2024 00:29:46.499 read: IOPS=13, BW=13.6MiB/s (14.2MB/s)(190MiB/13995msec) 00:29:46.499 slat (usec): min=421, max=2167.0k, avg=62447.59, stdev=289447.89 00:29:46.499 clat (msec): min=2128, max=12875, avg=8493.45, stdev=3980.12 00:29:46.499 lat (msec): min=2558, max=12953, avg=8555.90, stdev=3959.67 00:29:46.499 clat percentiles (msec): 00:29:46.499 | 1.00th=[ 2534], 5.00th=[ 2567], 10.00th=[ 2601], 20.00th=[ 2668], 00:29:46.499 | 30.00th=[ 4799], 40.00th=[10671], 50.00th=[10939], 60.00th=[11208], 00:29:46.499 | 70.00th=[11342], 80.00th=[11745], 90.00th=[12147], 95.00th=[12550], 00:29:46.499 | 99.00th=[12818], 99.50th=[12818], 99.90th=[12818], 99.95th=[12818], 00:29:46.499 | 99.99th=[12818] 00:29:46.499 bw ( KiB/s): min= 2052, max=51200, per=0.61%, avg=18186.29, stdev=22631.38, samples=7 00:29:46.499 iops : min= 2, max= 50, avg=17.71, stdev=22.13, samples=7 00:29:46.499 lat (msec) : >=2000=100.00% 00:29:46.499 cpu : usr=0.00%, sys=0.68%, ctx=684, majf=0, minf=32769 00:29:46.499 IO depths : 1=0.5%, 2=1.1%, 4=2.1%, 8=4.2%, 16=8.4%, 32=16.8%, >=64=66.8% 00:29:46.499 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:46.499 complete : 0=0.0%, 4=98.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.6% 00:29:46.499 issued rwts: total=190,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:46.499 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:46.499 job0: (groupid=0, jobs=1): err= 0: pid=1835419: Thu Dec 5 14:00:45 2024 00:29:46.499 read: IOPS=15, BW=15.0MiB/s (15.8MB/s)(212MiB/14090msec) 00:29:46.499 slat (usec): min=1029, max=2183.7k, avg=56417.60, stdev=287505.99 00:29:46.499 clat (msec): min=1971, max=12606, avg=7785.25, stdev=4606.21 00:29:46.499 lat (msec): min=1981, max=12612, avg=7841.67, stdev=4593.43 00:29:46.499 clat percentiles (msec): 00:29:46.499 | 1.00th=[ 1972], 5.00th=[ 1989], 10.00th=[ 2022], 20.00th=[ 2072], 00:29:46.499 | 30.00th=[ 2140], 40.00th=[ 6342], 50.00th=[10805], 60.00th=[11073], 00:29:46.499 | 70.00th=[11610], 80.00th=[12013], 90.00th=[12416], 95.00th=[12550], 00:29:46.499 | 99.00th=[12550], 99.50th=[12550], 99.90th=[12550], 99.95th=[12550], 00:29:46.499 | 99.99th=[12550] 00:29:46.499 bw ( KiB/s): min= 2052, max=67584, per=0.83%, avg=24815.57, stdev=27128.82, samples=7 00:29:46.499 iops : min= 2, max= 66, avg=24.00, stdev=26.49, samples=7 00:29:46.499 lat (msec) : 2000=5.66%, >=2000=94.34% 00:29:46.499 cpu : usr=0.00%, sys=0.59%, ctx=606, majf=0, minf=32769 00:29:46.499 IO depths : 1=0.5%, 2=0.9%, 4=1.9%, 8=3.8%, 16=7.5%, 
32=15.1%, >=64=70.3% 00:29:46.499 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:46.499 complete : 0=0.0%, 4=98.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.2% 00:29:46.499 issued rwts: total=212,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:46.499 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:46.499 job0: (groupid=0, jobs=1): err= 0: pid=1835420: Thu Dec 5 14:00:45 2024 00:29:46.499 read: IOPS=57, BW=57.1MiB/s (59.8MB/s)(578MiB/10128msec) 00:29:46.499 slat (usec): min=40, max=2094.5k, avg=17326.90, stdev=87923.88 00:29:46.499 clat (msec): min=110, max=4136, avg=2143.51, stdev=788.65 00:29:46.499 lat (msec): min=207, max=4166, avg=2160.83, stdev=785.73 00:29:46.500 clat percentiles (msec): 00:29:46.500 | 1.00th=[ 860], 5.00th=[ 1217], 10.00th=[ 1368], 20.00th=[ 1586], 00:29:46.500 | 30.00th=[ 1670], 40.00th=[ 1804], 50.00th=[ 1938], 60.00th=[ 2123], 00:29:46.500 | 70.00th=[ 2299], 80.00th=[ 2534], 90.00th=[ 3608], 95.00th=[ 3876], 00:29:46.500 | 99.00th=[ 4077], 99.50th=[ 4111], 99.90th=[ 4144], 99.95th=[ 4144], 00:29:46.500 | 99.99th=[ 4144] 00:29:46.500 bw ( KiB/s): min= 6144, max=114688, per=1.93%, avg=57584.88, stdev=25771.58, samples=16 00:29:46.500 iops : min= 6, max= 112, avg=56.12, stdev=25.16, samples=16 00:29:46.500 lat (msec) : 250=0.69%, 1000=2.08%, 2000=51.56%, >=2000=45.67% 00:29:46.500 cpu : usr=0.02%, sys=1.15%, ctx=1285, majf=0, minf=32769 00:29:46.500 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.4%, 16=2.8%, 32=5.5%, >=64=89.1% 00:29:46.500 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:46.500 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:29:46.500 issued rwts: total=578,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:46.500 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:46.500 job0: (groupid=0, jobs=1): err= 0: pid=1835421: Thu Dec 5 14:00:45 2024 00:29:46.500 read: IOPS=39, BW=39.5MiB/s (41.4MB/s)(399MiB/10097msec) 00:29:46.500 slat (usec): min=362, max=2141.7k, avg=25064.61, stdev=109020.80 00:29:46.500 clat (msec): min=93, max=5028, avg=2819.44, stdev=957.60 00:29:46.500 lat (msec): min=96, max=5042, avg=2844.50, stdev=949.69 00:29:46.500 clat percentiles (msec): 00:29:46.500 | 1.00th=[ 102], 5.00th=[ 2123], 10.00th=[ 2198], 20.00th=[ 2333], 00:29:46.500 | 30.00th=[ 2366], 40.00th=[ 2433], 50.00th=[ 2467], 60.00th=[ 2500], 00:29:46.500 | 70.00th=[ 2937], 80.00th=[ 3675], 90.00th=[ 4396], 95.00th=[ 4866], 00:29:46.500 | 99.00th=[ 5000], 99.50th=[ 5000], 99.90th=[ 5000], 99.95th=[ 5000], 00:29:46.500 | 99.99th=[ 5000] 00:29:46.500 bw ( KiB/s): min= 6144, max=67584, per=1.43%, avg=42809.69, stdev=18951.87, samples=13 00:29:46.500 iops : min= 6, max= 66, avg=41.77, stdev=18.55, samples=13 00:29:46.500 lat (msec) : 100=0.75%, 250=2.01%, 500=0.25%, >=2000=96.99% 00:29:46.500 cpu : usr=0.02%, sys=0.89%, ctx=1249, majf=0, minf=32769 00:29:46.500 IO depths : 1=0.3%, 2=0.5%, 4=1.0%, 8=2.0%, 16=4.0%, 32=8.0%, >=64=84.2% 00:29:46.500 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:46.500 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:29:46.500 issued rwts: total=399,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:46.500 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:46.500 job0: (groupid=0, jobs=1): err= 0: pid=1835422: Thu Dec 5 14:00:45 2024 00:29:46.500 read: IOPS=30, BW=30.1MiB/s (31.6MB/s)(305MiB/10117msec) 00:29:46.500 slat (usec): min=633, max=2129.4k, avg=32804.39, stdev=170919.02 
00:29:46.500 clat (msec): min=110, max=7920, avg=3877.60, stdev=2053.53 00:29:46.500 lat (msec): min=205, max=7929, avg=3910.40, stdev=2051.68 00:29:46.500 clat percentiles (msec): 00:29:46.500 | 1.00th=[ 226], 5.00th=[ 1552], 10.00th=[ 1670], 20.00th=[ 1770], 00:29:46.500 | 30.00th=[ 1972], 40.00th=[ 3205], 50.00th=[ 3977], 60.00th=[ 4329], 00:29:46.500 | 70.00th=[ 4665], 80.00th=[ 5470], 90.00th=[ 7148], 95.00th=[ 7550], 00:29:46.500 | 99.00th=[ 7819], 99.50th=[ 7886], 99.90th=[ 7953], 99.95th=[ 7953], 00:29:46.500 | 99.99th=[ 7953] 00:29:46.500 bw ( KiB/s): min= 4096, max=77824, per=1.22%, avg=36454.40, stdev=21496.96, samples=10 00:29:46.500 iops : min= 4, max= 76, avg=35.60, stdev=20.99, samples=10 00:29:46.500 lat (msec) : 250=1.64%, 2000=31.15%, >=2000=67.21% 00:29:46.500 cpu : usr=0.03%, sys=1.04%, ctx=914, majf=0, minf=32769 00:29:46.500 IO depths : 1=0.3%, 2=0.7%, 4=1.3%, 8=2.6%, 16=5.2%, 32=10.5%, >=64=79.3% 00:29:46.500 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:46.500 complete : 0=0.0%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.6% 00:29:46.500 issued rwts: total=305,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:46.500 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:46.500 job0: (groupid=0, jobs=1): err= 0: pid=1835423: Thu Dec 5 14:00:45 2024 00:29:46.500 read: IOPS=27, BW=27.3MiB/s (28.6MB/s)(295MiB/10814msec) 00:29:46.500 slat (usec): min=38, max=2148.9k, avg=36581.39, stdev=196052.48 00:29:46.500 clat (msec): min=21, max=7538, avg=2309.37, stdev=1430.73 00:29:46.500 lat (msec): min=1187, max=7555, avg=2345.95, stdev=1455.28 00:29:46.500 clat percentiles (msec): 00:29:46.500 | 1.00th=[ 1234], 5.00th=[ 1368], 10.00th=[ 1586], 20.00th=[ 1653], 00:29:46.500 | 30.00th=[ 1737], 40.00th=[ 1787], 50.00th=[ 1854], 60.00th=[ 1888], 00:29:46.500 | 70.00th=[ 2089], 80.00th=[ 2467], 90.00th=[ 2903], 95.00th=[ 7416], 00:29:46.500 | 99.00th=[ 7550], 99.50th=[ 7550], 99.90th=[ 7550], 99.95th=[ 7550], 00:29:46.500 | 99.99th=[ 7550] 00:29:46.500 bw ( KiB/s): min=19961, max=96256, per=1.90%, avg=56916.17, stdev=30705.32, samples=6 00:29:46.500 iops : min= 19, max= 94, avg=55.50, stdev=30.10, samples=6 00:29:46.500 lat (msec) : 50=0.34%, 2000=64.75%, >=2000=34.92% 00:29:46.500 cpu : usr=0.03%, sys=0.75%, ctx=601, majf=0, minf=32769 00:29:46.500 IO depths : 1=0.3%, 2=0.7%, 4=1.4%, 8=2.7%, 16=5.4%, 32=10.8%, >=64=78.6% 00:29:46.500 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:46.500 complete : 0=0.0%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.6% 00:29:46.500 issued rwts: total=295,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:46.500 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:46.500 job0: (groupid=0, jobs=1): err= 0: pid=1835424: Thu Dec 5 14:00:45 2024 00:29:46.500 read: IOPS=3, BW=3280KiB/s (3358kB/s)(45.0MiB/14050msec) 00:29:46.500 slat (usec): min=650, max=2102.5k, avg=264965.30, stdev=668233.81 00:29:46.500 clat (msec): min=2125, max=14036, avg=9002.77, stdev=3643.23 00:29:46.500 lat (msec): min=4185, max=14049, avg=9267.74, stdev=3564.43 00:29:46.500 clat percentiles (msec): 00:29:46.500 | 1.00th=[ 2123], 5.00th=[ 4212], 10.00th=[ 4212], 20.00th=[ 4245], 00:29:46.500 | 30.00th=[ 6342], 40.00th=[ 8490], 50.00th=[ 8490], 60.00th=[10671], 00:29:46.500 | 70.00th=[10671], 80.00th=[13758], 90.00th=[14026], 95.00th=[14026], 00:29:46.500 | 99.00th=[14026], 99.50th=[14026], 99.90th=[14026], 99.95th=[14026], 00:29:46.500 | 99.99th=[14026] 00:29:46.500 lat (msec) : >=2000=100.00% 
00:29:46.500 cpu : usr=0.00%, sys=0.21%, ctx=87, majf=0, minf=11521
00:29:46.500 IO depths : 1=2.2%, 2=4.4%, 4=8.9%, 8=17.8%, 16=35.6%, 32=31.1%, >=64=0.0%
00:29:46.500 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:46.500 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0%
00:29:46.500 issued rwts: total=45,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:29:46.500 latency : target=0, window=0, percentile=100.00%, depth=128
00:29:46.500 job0: (groupid=0, jobs=1): err= 0: pid=1835425: Thu Dec 5 14:00:45 2024
00:29:46.500 read: IOPS=40, BW=40.8MiB/s (42.8MB/s)(411MiB/10063msec)
00:29:46.500 slat (usec): min=554, max=2049.1k, avg=24353.27, stdev=103117.35
00:29:46.500 clat (msec): min=51, max=5025, avg=2666.97, stdev=931.40
00:29:46.500 lat (msec): min=126, max=5052, avg=2691.32, stdev=925.61
00:29:46.500 clat percentiles (msec):
00:29:46.500 | 1.00th=[ 163], 5.00th=[ 1838], 10.00th=[ 1938], 20.00th=[ 2022],
00:29:46.500 | 30.00th=[ 2089], 40.00th=[ 2165], 50.00th=[ 2400], 60.00th=[ 2836],
00:29:46.500 | 70.00th=[ 2937], 80.00th=[ 3037], 90.00th=[ 4463], 95.00th=[ 4866],
00:29:46.500 | 99.00th=[ 4933], 99.50th=[ 5000], 99.90th=[ 5000], 99.95th=[ 5000],
00:29:46.500 | 99.99th=[ 5000]
00:29:46.500 bw ( KiB/s): min=12288, max=71680, per=1.57%, avg=46933.33, stdev=23666.40, samples=12
00:29:46.500 iops : min= 12, max= 70, avg=45.83, stdev=23.11, samples=12
00:29:46.500 lat (msec) : 100=0.24%, 250=0.97%, 500=0.97%, 2000=15.09%, >=2000=82.73%
00:29:46.500 cpu : usr=0.02%, sys=0.95%, ctx=1258, majf=0, minf=32769
00:29:46.500 IO depths : 1=0.2%, 2=0.5%, 4=1.0%, 8=1.9%, 16=3.9%, 32=7.8%, >=64=84.7%
00:29:46.500 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:46.500 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4%
00:29:46.500 issued rwts: total=411,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:29:46.500 latency : target=0, window=0, percentile=100.00%, depth=128
00:29:46.500 job0: (groupid=0, jobs=1): err= 0: pid=1835426: Thu Dec 5 14:00:45 2024
00:29:46.500 read: IOPS=60, BW=60.3MiB/s (63.2MB/s)(652MiB/10821msec)
00:29:46.500 slat (usec): min=76, max=2148.5k, avg=16561.38, stdev=96116.68
00:29:46.500 clat (msec): min=20, max=4483, avg=1998.89, stdev=1040.08
00:29:46.500 lat (msec): min=788, max=4508, avg=2015.45, stdev=1041.92
00:29:46.500 clat percentiles (msec):
00:29:46.500 | 1.00th=[ 810], 5.00th=[ 835], 10.00th=[ 860], 20.00th=[ 1020],
00:29:46.500 | 30.00th=[ 1301], 40.00th=[ 1368], 50.00th=[ 1888], 60.00th=[ 2022],
00:29:46.500 | 70.00th=[ 2366], 80.00th=[ 2970], 90.00th=[ 3842], 95.00th=[ 4144],
00:29:46.500 | 99.00th=[ 4463], 99.50th=[ 4463], 99.90th=[ 4463], 99.95th=[ 4463],
00:29:46.500 | 99.99th=[ 4463]
00:29:46.500 bw ( KiB/s): min= 6144, max=159744, per=2.39%, avg=71488.00, stdev=42847.43, samples=15
00:29:46.500 iops : min= 6, max= 156, avg=69.73, stdev=41.83, samples=15
00:29:46.500 lat (msec) : 50=0.15%, 1000=17.94%, 2000=40.18%, >=2000=41.72%
00:29:46.500 cpu : usr=0.06%, sys=1.14%, ctx=1102, majf=0, minf=32769
00:29:46.500 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.5%, 32=4.9%, >=64=90.3%
00:29:46.500 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:46.500 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2%
00:29:46.500 issued rwts: total=652,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:29:46.500 latency : target=0, window=0, percentile=100.00%, depth=128
00:29:46.500 job0: (groupid=0, jobs=1): err= 0: pid=1835427: Thu Dec 5 14:00:45 2024
00:29:46.500 read: IOPS=50, BW=50.9MiB/s (53.4MB/s)(515MiB/10108msec)
00:29:46.500 slat (usec): min=32, max=2117.9k, avg=19527.22, stdev=95036.91
00:29:46.500 clat (msec): min=49, max=4218, avg=2180.34, stdev=1145.62
00:29:46.500 lat (msec): min=124, max=4225, avg=2199.87, stdev=1150.12
00:29:46.500 clat percentiles (msec):
00:29:46.500 | 1.00th=[ 140], 5.00th=[ 558], 10.00th=[ 776], 20.00th=[ 1150],
00:29:46.500 | 30.00th=[ 1418], 40.00th=[ 1636], 50.00th=[ 2089], 60.00th=[ 2165],
00:29:46.500 | 70.00th=[ 2635], 80.00th=[ 3708], 90.00th=[ 3943], 95.00th=[ 4044],
00:29:46.501 | 99.00th=[ 4178], 99.50th=[ 4212], 99.90th=[ 4212], 99.95th=[ 4212],
00:29:46.501 | 99.99th=[ 4212]
00:29:46.501 bw ( KiB/s): min= 4096, max=126976, per=2.21%, avg=66057.92, stdev=30136.52, samples=12
00:29:46.501 iops : min= 4, max= 124, avg=64.50, stdev=29.43, samples=12
00:29:46.501 lat (msec) : 50=0.19%, 250=1.36%, 500=1.36%, 750=5.83%, 1000=7.18%
00:29:46.501 lat (msec) : 2000=32.04%, >=2000=52.04%
00:29:46.501 cpu : usr=0.03%, sys=0.97%, ctx=1181, majf=0, minf=32769
00:29:46.501 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.6%, 16=3.1%, 32=6.2%, >=64=87.8%
00:29:46.501 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:46.501 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3%
00:29:46.501 issued rwts: total=515,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:29:46.501 latency : target=0, window=0, percentile=100.00%, depth=128
00:29:46.501 job1: (groupid=0, jobs=1): err= 0: pid=1835428: Thu Dec 5 14:00:45 2024
00:29:46.501 read: IOPS=4, BW=4145KiB/s (4244kB/s)(57.0MiB/14082msec)
00:29:46.501 slat (usec): min=750, max=2121.0k, avg=209618.29, stdev=600471.35
00:29:46.501 clat (msec): min=2132, max=14079, avg=11744.05, stdev=3587.29
00:29:46.501 lat (msec): min=4185, max=14081, avg=11953.66, stdev=3357.37
00:29:46.501 clat percentiles (msec):
00:29:46.501 | 1.00th=[ 2140], 5.00th=[ 4212], 10.00th=[ 4245], 20.00th=[ 8490],
00:29:46.501 | 30.00th=[12818], 40.00th=[13758], 50.00th=[13758], 60.00th=[13892],
00:29:46.501 | 70.00th=[14026], 80.00th=[14026], 90.00th=[14026], 95.00th=[14026],
00:29:46.501 | 99.00th=[14026], 99.50th=[14026], 99.90th=[14026], 99.95th=[14026],
00:29:46.501 | 99.99th=[14026]
00:29:46.501 lat (msec) : >=2000=100.00%
00:29:46.501 cpu : usr=0.00%, sys=0.33%, ctx=107, majf=0, minf=14593
00:29:46.501 IO depths : 1=1.8%, 2=3.5%, 4=7.0%, 8=14.0%, 16=28.1%, 32=45.6%, >=64=0.0%
00:29:46.501 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:46.501 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0%
00:29:46.501 issued rwts: total=57,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:29:46.501 latency : target=0, window=0, percentile=100.00%, depth=128
00:29:46.501 job1: (groupid=0, jobs=1): err= 0: pid=1835429: Thu Dec 5 14:00:45 2024
00:29:46.501 read: IOPS=66, BW=66.8MiB/s (70.1MB/s)(731MiB/10937msec)
00:29:46.501 slat (usec): min=33, max=2097.9k, avg=14896.01, stdev=78977.30
00:29:46.501 clat (msec): min=45, max=4077, avg=1784.02, stdev=1041.00
00:29:46.501 lat (msec): min=396, max=4080, avg=1798.91, stdev=1042.07
00:29:46.501 clat percentiles (msec):
00:29:46.501 | 1.00th=[ 397], 5.00th=[ 401], 10.00th=[ 418], 20.00th=[ 472],
00:29:46.501 | 30.00th=[ 1083], 40.00th=[ 1435], 50.00th=[ 1921], 60.00th=[ 2165],
00:29:46.501 | 70.00th=[ 2433], 80.00th=[ 2601], 90.00th=[ 3138], 95.00th=[ 3708],
00:29:46.501 | 99.00th=[ 4010], 99.50th=[ 4010], 99.90th=[ 4077], 99.95th=[ 4077],
00:29:46.501 | 99.99th=[ 4077]
00:29:46.501 bw ( KiB/s): min=26624, max=313344, per=2.75%, avg=82324.13, stdev=85590.05, samples=15
00:29:46.501 iops : min= 26, max= 306, avg=80.33, stdev=83.62, samples=15
00:29:46.501 lat (msec) : 50=0.14%, 500=21.89%, 750=4.51%, 1000=1.37%, 2000=25.03%
00:29:46.501 lat (msec) : >=2000=47.06%
00:29:46.501 cpu : usr=0.03%, sys=1.36%, ctx=1665, majf=0, minf=32769
00:29:46.501 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.1%, 16=2.2%, 32=4.4%, >=64=91.4%
00:29:46.501 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:46.501 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2%
00:29:46.501 issued rwts: total=731,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:29:46.501 latency : target=0, window=0, percentile=100.00%, depth=128
00:29:46.501 job1: (groupid=0, jobs=1): err= 0: pid=1835430: Thu Dec 5 14:00:45 2024
00:29:46.501 read: IOPS=84, BW=84.5MiB/s (88.6MB/s)(920MiB/10886msec)
00:29:46.501 slat (usec): min=27, max=1545.8k, avg=11777.12, stdev=53230.40
00:29:46.501 clat (msec): min=47, max=3516, avg=1372.85, stdev=708.82
00:29:46.501 lat (msec): min=375, max=3519, avg=1384.63, stdev=710.40
00:29:46.501 clat percentiles (msec):
00:29:46.501 | 1.00th=[ 380], 5.00th=[ 422], 10.00th=[ 567], 20.00th=[ 785],
00:29:46.501 | 30.00th=[ 869], 40.00th=[ 1028], 50.00th=[ 1217], 60.00th=[ 1418],
00:29:46.501 | 70.00th=[ 1754], 80.00th=[ 1921], 90.00th=[ 2198], 95.00th=[ 2802],
00:29:46.501 | 99.00th=[ 3440], 99.50th=[ 3473], 99.90th=[ 3507], 99.95th=[ 3507],
00:29:46.501 | 99.99th=[ 3507]
00:29:46.501 bw ( KiB/s): min=12288, max=247808, per=3.39%, avg=101376.00, stdev=65952.81, samples=16
00:29:46.501 iops : min= 12, max= 242, avg=99.00, stdev=64.41, samples=16
00:29:46.501 lat (msec) : 50=0.11%, 500=7.61%, 750=8.70%, 1000=21.20%, 2000=46.41%
00:29:46.501 lat (msec) : >=2000=15.98%
00:29:46.501 cpu : usr=0.06%, sys=1.17%, ctx=1710, majf=0, minf=32769
00:29:46.501 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.9%, 16=1.7%, 32=3.5%, >=64=93.2%
00:29:46.501 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:46.501 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:29:46.501 issued rwts: total=920,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:29:46.501 latency : target=0, window=0, percentile=100.00%, depth=128
00:29:46.501 job1: (groupid=0, jobs=1): err= 0: pid=1835431: Thu Dec 5 14:00:45 2024
00:29:46.501 read: IOPS=3, BW=3424KiB/s (3506kB/s)(47.0MiB/14057msec)
00:29:46.501 slat (usec): min=758, max=2098.0k, avg=253642.62, stdev=657868.07
00:29:46.501 clat (msec): min=2135, max=14055, avg=11102.31, stdev=3577.79
00:29:46.501 lat (msec): min=4190, max=14056, avg=11355.96, stdev=3343.12
00:29:46.501 clat percentiles (msec):
00:29:46.501 | 1.00th=[ 2140], 5.00th=[ 4212], 10.00th=[ 4245], 20.00th=[ 8490],
00:29:46.501 | 30.00th=[10671], 40.00th=[10671], 50.00th=[12818], 60.00th=[13892],
00:29:46.501 | 70.00th=[13892], 80.00th=[14026], 90.00th=[14026], 95.00th=[14026],
00:29:46.501 | 99.00th=[14026], 99.50th=[14026], 99.90th=[14026], 99.95th=[14026],
00:29:46.501 | 99.99th=[14026]
00:29:46.501 lat (msec) : >=2000=100.00%
00:29:46.501 cpu : usr=0.01%, sys=0.20%, ctx=68, majf=0, minf=12033
00:29:46.501 IO depths : 1=2.1%, 2=4.3%, 4=8.5%, 8=17.0%, 16=34.0%, 32=34.0%, >=64=0.0%
00:29:46.501 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:46.501 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0%
00:29:46.501 issued rwts: total=47,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:29:46.501 latency : target=0, window=0, percentile=100.00%, depth=128
00:29:46.501 job1: (groupid=0, jobs=1): err= 0: pid=1835432: Thu Dec 5 14:00:45 2024
00:29:46.501 read: IOPS=62, BW=62.4MiB/s (65.5MB/s)(629MiB/10076msec)
00:29:46.501 slat (usec): min=63, max=2111.7k, avg=15898.61, stdev=116587.35
00:29:46.501 clat (msec): min=73, max=4946, avg=1838.47, stdev=1455.16
00:29:46.501 lat (msec): min=78, max=4947, avg=1854.37, stdev=1456.07
00:29:46.501 clat percentiles (msec):
00:29:46.501 | 1.00th=[ 146], 5.00th=[ 802], 10.00th=[ 818], 20.00th=[ 911],
00:29:46.501 | 30.00th=[ 986], 40.00th=[ 1020], 50.00th=[ 1267], 60.00th=[ 1469],
00:29:46.501 | 70.00th=[ 1536], 80.00th=[ 2735], 90.00th=[ 4732], 95.00th=[ 4866],
00:29:46.501 | 99.00th=[ 4933], 99.50th=[ 4933], 99.90th=[ 4933], 99.95th=[ 4933],
00:29:46.501 | 99.99th=[ 4933]
00:29:46.501 bw ( KiB/s): min= 8192, max=176128, per=3.12%, avg=93253.36, stdev=51750.00, samples=11
00:29:46.501 iops : min= 8, max= 172, avg=91.00, stdev=50.61, samples=11
00:29:46.501 lat (msec) : 100=0.32%, 250=1.75%, 500=1.43%, 1000=30.68%, 2000=44.99%
00:29:46.501 lat (msec) : >=2000=20.83%
00:29:46.501 cpu : usr=0.00%, sys=1.11%, ctx=1174, majf=0, minf=32769
00:29:46.501 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.3%, 16=2.5%, 32=5.1%, >=64=90.0%
00:29:46.501 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:46.501 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2%
00:29:46.501 issued rwts: total=629,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:29:46.501 latency : target=0, window=0, percentile=100.00%, depth=128
00:29:46.501 job1: (groupid=0, jobs=1): err= 0: pid=1835433: Thu Dec 5 14:00:45 2024
00:29:46.501 read: IOPS=77, BW=77.8MiB/s (81.6MB/s)(791MiB/10165msec)
00:29:46.501 slat (usec): min=47, max=118404, avg=12722.39, stdev=20924.55
00:29:46.501 clat (msec): min=98, max=3626, avg=1519.60, stdev=819.66
00:29:46.501 lat (msec): min=181, max=3628, avg=1532.32, stdev=822.12
00:29:46.501 clat percentiles (msec):
00:29:46.501 | 1.00th=[ 430], 5.00th=[ 701], 10.00th=[ 718], 20.00th=[ 793],
00:29:46.501 | 30.00th=[ 936], 40.00th=[ 1116], 50.00th=[ 1301], 60.00th=[ 1569],
00:29:46.501 | 70.00th=[ 1787], 80.00th=[ 1955], 90.00th=[ 2836], 95.00th=[ 3473],
00:29:46.501 | 99.00th=[ 3608], 99.50th=[ 3608], 99.90th=[ 3641], 99.95th=[ 3641],
00:29:46.501 | 99.99th=[ 3641]
00:29:46.501 bw ( KiB/s): min=16384, max=190464, per=2.67%, avg=79872.00, stdev=53791.69, samples=17
00:29:46.501 iops : min= 16, max= 186, avg=78.00, stdev=52.53, samples=17
00:29:46.501 lat (msec) : 100=0.13%, 250=0.51%, 500=1.26%, 750=13.40%, 1000=17.57%
00:29:46.501 lat (msec) : 2000=48.29%, >=2000=18.84%
00:29:46.501 cpu : usr=0.04%, sys=1.30%, ctx=1581, majf=0, minf=32331
00:29:46.501 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.0%, 16=2.0%, 32=4.0%, >=64=92.0%
00:29:46.501 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:46.501 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2%
00:29:46.501 issued rwts: total=791,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:29:46.501 latency : target=0, window=0, percentile=100.00%, depth=128
00:29:46.501 job1: (groupid=0, jobs=1): err= 0: pid=1835434: Thu Dec 5 14:00:45 2024
00:29:46.501 read: IOPS=83, BW=83.3MiB/s (87.4MB/s)(840MiB/10078msec)
00:29:46.501 slat (usec): min=32, max=2139.5k, avg=11905.34, stdev=107929.10
00:29:46.501 clat (msec): min=74, max=5995, avg=787.65, stdev=823.32
00:29:46.501 lat (msec): min=81, max=6081, avg=799.56, stdev=843.67
00:29:46.501 clat percentiles (msec):
00:29:46.501 | 1.00th=[ 90], 5.00th=[ 292], 10.00th=[ 451], 20.00th=[ 502],
00:29:46.502 | 30.00th=[ 584], 40.00th=[ 651], 50.00th=[ 701], 60.00th=[ 735],
00:29:46.502 | 70.00th=[ 776], 80.00th=[ 818], 90.00th=[ 877], 95.00th=[ 936],
00:29:46.502 | 99.00th=[ 5873], 99.50th=[ 5873], 99.90th=[ 6007], 99.95th=[ 6007],
00:29:46.502 | 99.99th=[ 6007]
00:29:46.502 bw ( KiB/s): min=126976, max=309248, per=6.07%, avg=181467.00, stdev=61402.54, samples=8
00:29:46.502 iops : min= 124, max= 302, avg=177.00, stdev=60.08, samples=8
00:29:46.502 lat (msec) : 100=1.79%, 250=1.79%, 500=16.07%, 750=45.60%, 1000=32.02%
00:29:46.502 lat (msec) : >=2000=2.74%
00:29:46.502 cpu : usr=0.02%, sys=1.07%, ctx=783, majf=0, minf=32769
00:29:46.502 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=1.9%, 32=3.8%, >=64=92.5%
00:29:46.502 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:46.502 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:29:46.502 issued rwts: total=840,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:29:46.502 latency : target=0, window=0, percentile=100.00%, depth=128
00:29:46.502 job1: (groupid=0, jobs=1): err= 0: pid=1835435: Thu Dec 5 14:00:45 2024
00:29:46.502 read: IOPS=68, BW=68.8MiB/s (72.1MB/s)(971MiB/14118msec)
00:29:46.502 slat (usec): min=85, max=4248.1k, avg=12334.79, stdev=151301.59
00:29:46.502 clat (msec): min=252, max=8821, avg=1793.39, stdev=2662.40
00:29:46.502 lat (msec): min=255, max=8823, avg=1805.72, stdev=2670.21
00:29:46.502 clat percentiles (msec):
00:29:46.502 | 1.00th=[ 275], 5.00th=[ 326], 10.00th=[ 368], 20.00th=[ 393],
00:29:46.502 | 30.00th=[ 460], 40.00th=[ 542], 50.00th=[ 567], 60.00th=[ 1167],
00:29:46.502 | 70.00th=[ 1267], 80.00th=[ 1418], 90.00th=[ 8658], 95.00th=[ 8658],
00:29:46.502 | 99.00th=[ 8792], 99.50th=[ 8792], 99.90th=[ 8792], 99.95th=[ 8792],
00:29:46.502 | 99.99th=[ 8792]
00:29:46.502 bw ( KiB/s): min= 2052, max=403456, per=4.82%, avg=144043.00, stdev=126347.43, samples=12
00:29:46.502 iops : min= 2, max= 394, avg=140.67, stdev=123.39, samples=12
00:29:46.502 lat (msec) : 500=33.26%, 750=20.08%, 1000=2.99%, 2000=29.97%, >=2000=13.70%
00:29:46.502 cpu : usr=0.01%, sys=1.08%, ctx=1642, majf=0, minf=32769
00:29:46.502 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.6%, 32=3.3%, >=64=93.5%
00:29:46.502 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:46.502 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:29:46.502 issued rwts: total=971,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:29:46.502 latency : target=0, window=0, percentile=100.00%, depth=128
00:29:46.502 job1: (groupid=0, jobs=1): err= 0: pid=1835436: Thu Dec 5 14:00:45 2024
00:29:46.502 read: IOPS=69, BW=69.7MiB/s (73.1MB/s)(704MiB/10101msec)
00:29:46.502 slat (usec): min=347, max=181477, avg=14202.37, stdev=20152.96
00:29:46.502 clat (msec): min=99, max=3527, avg=1691.60, stdev=803.87
00:29:46.502 lat (msec): min=176, max=3549, avg=1705.80, stdev=804.92
00:29:46.502 clat percentiles (msec):
00:29:46.502 | 1.00th=[ 309], 5.00th=[ 489], 10.00th=[ 852], 20.00th=[ 1099],
00:29:46.502 | 30.00th=[ 1267], 40.00th=[ 1401], 50.00th=[ 1586], 60.00th=[ 1754],
00:29:46.502 | 70.00th=[ 1838], 80.00th=[ 2039], 90.00th=[ 3306], 95.00th=[ 3406],
00:29:46.502 | 99.00th=[ 3473], 99.50th=[ 3473], 99.90th=[ 3540], 99.95th=[ 3540],
00:29:46.502 | 99.99th=[ 3540]
00:29:46.502 bw ( KiB/s): min= 8192, max=151552, per=2.19%, avg=65529.89, stdev=39368.60, samples=18
00:29:46.502 iops : min= 8, max= 148, avg=63.89, stdev=38.47, samples=18
00:29:46.502 lat (msec) : 100=0.14%, 250=0.85%, 500=4.55%, 750=3.55%, 1000=6.11%
00:29:46.502 lat (msec) : 2000=64.20%, >=2000=20.60%
00:29:46.502 cpu : usr=0.02%, sys=1.23%, ctx=1865, majf=0, minf=32769
00:29:46.502 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.1%, 16=2.3%, 32=4.5%, >=64=91.1%
00:29:46.502 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:46.502 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2%
00:29:46.502 issued rwts: total=704,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:29:46.502 latency : target=0, window=0, percentile=100.00%, depth=128
00:29:46.502 job1: (groupid=0, jobs=1): err= 0: pid=1835437: Thu Dec 5 14:00:45 2024
00:29:46.502 read: IOPS=36, BW=36.3MiB/s (38.1MB/s)(508MiB/13999msec)
00:29:46.502 slat (usec): min=38, max=2161.4k, avg=23359.90, stdev=192137.36
00:29:46.502 clat (msec): min=383, max=6909, avg=2382.20, stdev=2670.60
00:29:46.502 lat (msec): min=387, max=6933, avg=2405.56, stdev=2680.30
00:29:46.502 clat percentiles (msec):
00:29:46.502 | 1.00th=[ 388], 5.00th=[ 422], 10.00th=[ 464], 20.00th=[ 510],
00:29:46.502 | 30.00th=[ 567], 40.00th=[ 592], 50.00th=[ 634], 60.00th=[ 701],
00:29:46.502 | 70.00th=[ 2869], 80.00th=[ 6477], 90.00th=[ 6611], 95.00th=[ 6745],
00:29:46.502 | 99.00th=[ 6879], 99.50th=[ 6879], 99.90th=[ 6879], 99.95th=[ 6879],
00:29:46.502 | 99.99th=[ 6879]
00:29:46.502 bw ( KiB/s): min= 2048, max=301056, per=3.73%, avg=111470.29, stdev=124472.54, samples=7
00:29:46.502 iops : min= 2, max= 294, avg=108.86, stdev=121.56, samples=7
00:29:46.502 lat (msec) : 500=15.16%, 750=49.02%, 1000=2.76%, >=2000=33.07%
00:29:46.502 cpu : usr=0.02%, sys=0.57%, ctx=701, majf=0, minf=32769
00:29:46.502 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.6%, 16=3.1%, 32=6.3%, >=64=87.6%
00:29:46.502 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:46.502 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3%
00:29:46.502 issued rwts: total=508,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:29:46.502 latency : target=0, window=0, percentile=100.00%, depth=128
00:29:46.502 job1: (groupid=0, jobs=1): err= 0: pid=1835438: Thu Dec 5 14:00:45 2024
00:29:46.502 read: IOPS=90, BW=90.6MiB/s (95.0MB/s)(915MiB/10099msec)
00:29:46.502 slat (usec): min=416, max=1544.6k, avg=10927.41, stdev=52312.53
00:29:46.502 clat (msec): min=97, max=2963, avg=1257.78, stdev=723.06
00:29:46.502 lat (msec): min=100, max=2977, avg=1268.71, stdev=723.50
00:29:46.502 clat percentiles (msec):
00:29:46.502 | 1.00th=[ 224], 5.00th=[ 472], 10.00th=[ 477], 20.00th=[ 518],
00:29:46.502 | 30.00th=[ 651], 40.00th=[ 986], 50.00th=[ 1167], 60.00th=[ 1401],
00:29:46.502 | 70.00th=[ 1552], 80.00th=[ 1653], 90.00th=[ 2735], 95.00th=[ 2869],
00:29:46.502 | 99.00th=[ 2937], 99.50th=[ 2970], 99.90th=[ 2970], 99.95th=[ 2970],
00:29:46.502 | 99.99th=[ 2970]
00:29:46.502 bw ( KiB/s): min=22483, max=276480, per=3.60%, avg=107575.13, stdev=86231.45, samples=15
00:29:46.502 iops : min= 21, max= 270, avg=104.93, stdev=84.30, samples=15
00:29:46.502 lat (msec) : 100=0.11%, 250=1.09%, 500=17.70%, 750=12.02%, 1000=10.16%
00:29:46.502 lat (msec) : 2000=46.12%, >=2000=12.79%
00:29:46.502 cpu : usr=0.04%, sys=1.17%, ctx=2016, majf=0, minf=32769
00:29:46.502 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.9%, 16=1.7%, 32=3.5%, >=64=93.1%
00:29:46.502 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:46.502 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:29:46.502 issued rwts: total=915,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:29:46.502 latency : target=0, window=0, percentile=100.00%, depth=128
00:29:46.502 job1: (groupid=0, jobs=1): err= 0: pid=1835439: Thu Dec 5 14:00:45 2024
00:29:46.502 read: IOPS=64, BW=64.9MiB/s (68.1MB/s)(654MiB/10076msec)
00:29:46.502 slat (usec): min=49, max=2129.2k, avg=15289.06, stdev=119307.06
00:29:46.502 clat (msec): min=73, max=5315, avg=1753.40, stdev=1730.17
00:29:46.502 lat (msec): min=75, max=5316, avg=1768.69, stdev=1732.56
00:29:46.502 clat percentiles (msec):
00:29:46.502 | 1.00th=[ 146], 5.00th=[ 347], 10.00th=[ 363], 20.00th=[ 447],
00:29:46.502 | 30.00th=[ 617], 40.00th=[ 693], 50.00th=[ 961], 60.00th=[ 1519],
00:29:46.502 | 70.00th=[ 1821], 80.00th=[ 2400], 90.00th=[ 5134], 95.00th=[ 5269],
00:29:46.502 | 99.00th=[ 5336], 99.50th=[ 5336], 99.90th=[ 5336], 99.95th=[ 5336],
00:29:46.502 | 99.99th=[ 5336]
00:29:46.502 bw ( KiB/s): min= 4096, max=372736, per=3.61%, avg=107805.70, stdev=117291.68, samples=10
00:29:46.502 iops : min= 4, max= 364, avg=105.20, stdev=114.60, samples=10
00:29:46.502 lat (msec) : 100=0.76%, 250=0.76%, 500=21.56%, 750=21.71%, 1000=6.27%
00:29:46.502 lat (msec) : 2000=28.59%, >=2000=20.34%
00:29:46.502 cpu : usr=0.03%, sys=1.02%, ctx=1114, majf=0, minf=32769
00:29:46.502 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.4%, 32=4.9%, >=64=90.4%
00:29:46.502 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:46.502 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2%
00:29:46.502 issued rwts: total=654,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:29:46.502 latency : target=0, window=0, percentile=100.00%, depth=128
00:29:46.502 job1: (groupid=0, jobs=1): err= 0: pid=1835440: Thu Dec 5 14:00:45 2024
00:29:46.502 read: IOPS=54, BW=54.0MiB/s (56.6MB/s)(585MiB/10831msec)
00:29:46.502 slat (usec): min=419, max=2100.2k, avg=18438.60, stdev=88359.11
00:29:46.502 clat (msec): min=42, max=3687, avg=2153.28, stdev=532.19
00:29:46.502 lat (msec): min=1309, max=3706, avg=2171.72, stdev=524.91
00:29:46.502 clat percentiles (msec):
00:29:46.502 | 1.00th=[ 1318], 5.00th=[ 1368], 10.00th=[ 1552], 20.00th=[ 1687],
00:29:46.502 | 30.00th=[ 1787], 40.00th=[ 2005], 50.00th=[ 2072], 60.00th=[ 2232],
00:29:46.502 | 70.00th=[ 2433], 80.00th=[ 2567], 90.00th=[ 2836], 95.00th=[ 3205],
00:29:46.502 | 99.00th=[ 3641], 99.50th=[ 3675], 99.90th=[ 3675], 99.95th=[ 3675],
00:29:46.502 | 99.99th=[ 3675]
00:29:46.502 bw ( KiB/s): min=20480, max=120832, per=2.09%, avg=62384.53, stdev=27136.75, samples=15
00:29:46.502 iops : min= 20, max= 118, avg=60.87, stdev=26.45, samples=15
00:29:46.502 lat (msec) : 50=0.17%, 2000=39.32%, >=2000=60.51%
00:29:46.502 cpu : usr=0.03%, sys=0.99%, ctx=1476, majf=0, minf=32769
00:29:46.502 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.4%, 16=2.7%, 32=5.5%, >=64=89.2%
00:29:46.502 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:46.502 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2%
00:29:46.502 issued rwts: total=585,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:29:46.502 latency : target=0, window=0, percentile=100.00%, depth=128
00:29:46.502 job2: (groupid=0, jobs=1): err= 0: pid=1835442: Thu Dec 5 14:00:45 2024
00:29:46.502 read: IOPS=3, BW=4093KiB/s (4192kB/s)(56.0MiB/14009msec)
00:29:46.502 slat (usec): min=582, max=2094.7k, avg=212022.47, stdev=607585.91
00:29:46.502 clat (msec): min=2135, max=14006, avg=9959.55, stdev=3746.27
00:29:46.502 lat (msec): min=4177, max=14008, avg=10171.57, stdev=3629.56
00:29:46.502 clat percentiles (msec):
00:29:46.502 | 1.00th=[ 2140], 5.00th=[ 4178], 10.00th=[ 4212], 20.00th=[ 6342],
00:29:46.502 | 30.00th=[ 8490], 40.00th=[ 8557], 50.00th=[10671], 60.00th=[12818],
00:29:46.502 | 70.00th=[12818], 80.00th=[14026], 90.00th=[14026], 95.00th=[14026],
00:29:46.502 | 99.00th=[14026], 99.50th=[14026], 99.90th=[14026], 99.95th=[14026],
00:29:46.503 | 99.99th=[14026]
00:29:46.503 lat (msec) : >=2000=100.00%
00:29:46.503 cpu : usr=0.00%, sys=0.25%, ctx=56, majf=0, minf=14337
00:29:46.503 IO depths : 1=1.8%, 2=3.6%, 4=7.1%, 8=14.3%, 16=28.6%, 32=44.6%, >=64=0.0%
00:29:46.503 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:46.503 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0%
00:29:46.503 issued rwts: total=56,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:29:46.503 latency : target=0, window=0, percentile=100.00%, depth=128
00:29:46.503 job2: (groupid=0, jobs=1): err= 0: pid=1835443: Thu Dec 5 14:00:45 2024
00:29:46.503 read: IOPS=34, BW=34.8MiB/s (36.5MB/s)(380MiB/10927msec)
00:29:46.503 slat (usec): min=539, max=2140.6k, avg=28613.04, stdev=161190.09
00:29:46.503 clat (msec): min=51, max=6094, avg=3077.05, stdev=1620.61
00:29:46.503 lat (msec): min=1336, max=6123, avg=3105.66, stdev=1614.08
00:29:46.503 clat percentiles (msec):
00:29:46.503 | 1.00th=[ 1334], 5.00th=[ 1401], 10.00th=[ 1519], 20.00th=[ 1603],
00:29:46.503 | 30.00th=[ 1653], 40.00th=[ 1838], 50.00th=[ 2106], 60.00th=[ 3540],
00:29:46.503 | 70.00th=[ 4396], 80.00th=[ 5000], 90.00th=[ 5537], 95.00th=[ 5738],
00:29:46.503 | 99.00th=[ 6074], 99.50th=[ 6074], 99.90th=[ 6074], 99.95th=[ 6074],
00:29:46.503 | 99.99th=[ 6074]
00:29:46.503 bw ( KiB/s): min= 4096, max=106496, per=1.92%, avg=57344.00, stdev=32446.42, samples=9
00:29:46.503 iops : min= 4, max= 104, avg=56.00, stdev=31.69, samples=9
00:29:46.503 lat (msec) : 100=0.26%, 2000=47.63%, >=2000=52.11%
00:29:46.503 cpu : usr=0.04%, sys=1.08%, ctx=884, majf=0, minf=32769
00:29:46.503 IO depths : 1=0.3%, 2=0.5%, 4=1.1%, 8=2.1%, 16=4.2%, 32=8.4%, >=64=83.4%
00:29:46.503 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:46.503 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4%
00:29:46.503 issued rwts: total=380,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:29:46.503 latency : target=0, window=0, percentile=100.00%, depth=128
00:29:46.503 job2: (groupid=0, jobs=1): err= 0: pid=1835444: Thu Dec 5 14:00:45 2024
00:29:46.503 read: IOPS=24, BW=25.0MiB/s (26.2MB/s)(273MiB/10934msec)
00:29:46.503 slat (usec): min=45, max=2092.1k, avg=39866.88, stdev=186292.56
00:29:46.503 clat (msec): min=48, max=6830, avg=3478.51, stdev=1747.10
00:29:46.503 lat (msec): min=1070, max=6843, avg=3518.38, stdev=1755.74
00:29:46.503 clat percentiles (msec):
00:29:46.503 | 1.00th=[ 1083], 5.00th=[ 1183], 10.00th=[ 1301], 20.00th=[ 1636],
00:29:46.503 | 30.00th=[ 2072], 40.00th=[ 3171], 50.00th=[ 3272], 60.00th=[ 3339],
00:29:46.503 | 70.00th=[ 3910], 80.00th=[ 5201], 90.00th=[ 6611], 95.00th=[ 6745],
00:29:46.503 | 99.00th=[ 6812], 99.50th=[ 6812], 99.90th=[ 6812], 99.95th=[ 6812],
00:29:46.503 | 99.99th=[ 6812]
00:29:46.503 bw ( KiB/s): min=30720, max=77824, per=1.99%, avg=59369.80, stdev=17919.10, samples=5
00:29:46.503 iops : min= 30, max= 76, avg=57.80, stdev=17.56, samples=5
00:29:46.503 lat (msec) : 50=0.37%, 2000=27.84%, >=2000=71.79%
00:29:46.503 cpu : usr=0.02%, sys=0.83%, ctx=647, majf=0, minf=32769
00:29:46.503 IO depths : 1=0.4%, 2=0.7%, 4=1.5%, 8=2.9%, 16=5.9%, 32=11.7%, >=64=76.9%
00:29:46.503 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:46.503 complete : 0=0.0%, 4=99.3%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.7%
00:29:46.503 issued rwts: total=273,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:29:46.503 latency : target=0, window=0, percentile=100.00%, depth=128
00:29:46.503 job2: (groupid=0, jobs=1): err= 0: pid=1835445: Thu Dec 5 14:00:45 2024
00:29:46.503 read: IOPS=40, BW=40.6MiB/s (42.6MB/s)(411MiB/10113msec)
00:29:46.503 slat (usec): min=49, max=2176.4k, avg=24334.75, stdev=179184.18
00:29:46.503 clat (msec): min=110, max=5852, avg=2728.33, stdev=1864.18
00:29:46.503 lat (msec): min=200, max=5863, avg=2752.66, stdev=1863.18
00:29:46.503 clat percentiles (msec):
00:29:46.503 | 1.00th=[ 230], 5.00th=[ 885], 10.00th=[ 894], 20.00th=[ 944],
00:29:46.503 | 30.00th=[ 995], 40.00th=[ 1062], 50.00th=[ 2635], 60.00th=[ 2869],
00:29:46.503 | 70.00th=[ 3171], 80.00th=[ 5269], 90.00th=[ 5738], 95.00th=[ 5738],
00:29:46.503 | 99.00th=[ 5805], 99.50th=[ 5873], 99.90th=[ 5873], 99.95th=[ 5873],
00:29:46.503 | 99.99th=[ 5873]
00:29:46.503 bw ( KiB/s): min=10240, max=141312, per=3.24%, avg=96938.67, stdev=52240.73, samples=6
00:29:46.503 iops : min= 10, max= 138, avg=94.67, stdev=51.02, samples=6
00:29:46.503 lat (msec) : 250=1.22%, 1000=34.79%, 2000=6.57%, >=2000=57.42%
00:29:46.503 cpu : usr=0.01%, sys=0.98%, ctx=525, majf=0, minf=32769
00:29:46.503 IO depths : 1=0.2%, 2=0.5%, 4=1.0%, 8=1.9%, 16=3.9%, 32=7.8%, >=64=84.7%
00:29:46.503 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:46.503 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4%
00:29:46.503 issued rwts: total=411,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:29:46.503 latency : target=0, window=0, percentile=100.00%, depth=128
00:29:46.503 job2: (groupid=0, jobs=1): err= 0: pid=1835446: Thu Dec 5 14:00:45 2024
00:29:46.503 read: IOPS=40, BW=40.6MiB/s (42.6MB/s)(441MiB/10863msec)
00:29:46.503 slat (usec): min=77, max=2111.9k, avg=24516.52, stdev=167978.48
00:29:46.503 clat (msec): min=49, max=7621, avg=2930.74, stdev=1714.92
00:29:46.503 lat (msec): min=1011, max=7627, avg=2955.26, stdev=1720.89
00:29:46.503 clat percentiles (msec):
00:29:46.503 | 1.00th=[ 1011], 5.00th=[ 1083], 10.00th=[ 1133], 20.00th=[ 1267],
00:29:46.503 | 30.00th=[ 1385], 40.00th=[ 1485], 50.00th=[ 3507], 60.00th=[ 3608],
00:29:46.503 | 70.00th=[ 3742], 80.00th=[ 4463], 90.00th=[ 5067], 95.00th=[ 5537],
00:29:46.503 | 99.00th=[ 7550], 99.50th=[ 7617], 99.90th=[ 7617], 99.95th=[ 7617],
00:29:46.503 | 99.99th=[ 7617]
00:29:46.503 bw ( KiB/s): min= 6131, max=108761, per=2.15%, avg=64122.80, stdev=36782.17, samples=10
00:29:46.503 iops : min= 5, max= 106, avg=62.50, stdev=36.07, samples=10
00:29:46.503 lat (msec) : 50=0.23%, 2000=45.12%, >=2000=54.65%
00:29:46.503 cpu : usr=0.00%, sys=0.90%, ctx=664, majf=0, minf=32769
00:29:46.503 IO depths : 1=0.2%, 2=0.5%, 4=0.9%, 8=1.8%, 16=3.6%, 32=7.3%, >=64=85.7%
00:29:46.503 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:46.503 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3%
00:29:46.503 issued rwts: total=441,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:29:46.503 latency : target=0, window=0, percentile=100.00%, depth=128
00:29:46.503 job2: (groupid=0, jobs=1): err= 0: pid=1835447: Thu Dec 5 14:00:45 2024
00:29:46.503 read: IOPS=46, BW=46.2MiB/s (48.4MB/s)(502MiB/10870msec)
00:29:46.503 slat (usec): min=52, max=2158.2k, avg=21592.77, stdev=144190.90
00:29:46.503 clat (msec): min=27, max=4706, avg=2604.34, stdev=1168.94
00:29:46.503 lat (msec): min=1106, max=4707, avg=2625.93, stdev=1164.33
00:29:46.503 clat percentiles (msec):
00:29:46.503 | 1.00th=[ 1250], 5.00th=[ 1351], 10.00th=[ 1385], 20.00th=[ 1418],
00:29:46.503 | 30.00th=[ 1485], 40.00th=[ 1536], 50.00th=[ 3171], 60.00th=[ 3272],
00:29:46.503 | 70.00th=[ 3406], 80.00th=[ 3641], 90.00th=[ 4329], 95.00th=[ 4597],
00:29:46.503 | 99.00th=[ 4732], 99.50th=[ 4732], 99.90th=[ 4732], 99.95th=[ 4732],
00:29:46.503 | 99.99th=[ 4732]
00:29:46.503 bw ( KiB/s): min=20039, max=137216, per=2.33%, avg=69591.91, stdev=37520.22, samples=11
00:29:46.503 iops : min= 19, max= 134, avg=67.91, stdev=36.72, samples=11
00:29:46.503 lat (msec) : 50=0.20%, 2000=47.41%, >=2000=52.39%
00:29:46.503 cpu : usr=0.02%, sys=0.99%, ctx=862, majf=0, minf=32769
00:29:46.503 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.6%, 16=3.2%, 32=6.4%, >=64=87.5%
00:29:46.503 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:46.503 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3%
00:29:46.503 issued rwts: total=502,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:29:46.503 latency : target=0, window=0, percentile=100.00%, depth=128
00:29:46.503 job2: (groupid=0, jobs=1): err= 0: pid=1835448: Thu Dec 5 14:00:45 2024
00:29:46.503 read: IOPS=38, BW=38.4MiB/s (40.3MB/s)(419MiB/10898msec)
00:29:46.503 slat (usec): min=41, max=2172.1k, avg=25886.54, stdev=154229.54
00:29:46.503 clat (msec): min=49, max=5758, avg=2697.64, stdev=1520.36
00:29:46.503 lat (msec): min=1508, max=5763, avg=2723.52, stdev=1516.18
00:29:46.503 clat percentiles (msec):
00:29:46.503 | 1.00th=[ 1519], 5.00th=[ 1536], 10.00th=[ 1552], 20.00th=[ 1603],
00:29:46.503 | 30.00th=[ 1620], 40.00th=[ 1653], 50.00th=[ 1770], 60.00th=[ 1871],
00:29:46.503 | 70.00th=[ 3742], 80.00th=[ 4597], 90.00th=[ 5201], 95.00th=[ 5604],
00:29:46.503 | 99.00th=[ 5738], 99.50th=[ 5738], 99.90th=[ 5738], 99.95th=[ 5738],
00:29:46.503 | 99.99th=[ 5738]
00:29:46.503 bw ( KiB/s): min=14336, max=108544, per=2.22%, avg=66218.67, stdev=28211.15, samples=9
00:29:46.503 iops : min= 14, max= 106, avg=64.67, stdev=27.55, samples=9
00:29:46.503 lat (msec) : 50=0.24%, 2000=64.68%, >=2000=35.08%
00:29:46.503 cpu : usr=0.05%, sys=0.81%, ctx=948, majf=0, minf=32769
00:29:46.503 IO depths : 1=0.2%, 2=0.5%, 4=1.0%, 8=1.9%, 16=3.8%, 32=7.6%, >=64=85.0%
00:29:46.503 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:46.503 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3%
00:29:46.503 issued rwts: total=419,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:29:46.503 latency : target=0, window=0, percentile=100.00%, depth=128
00:29:46.503 job2: (groupid=0, jobs=1): err= 0: pid=1835449: Thu Dec 5 14:00:45 2024
00:29:46.503 read: IOPS=61, BW=61.2MiB/s (64.2MB/s)(728MiB/11889msec)
00:29:46.503 slat (usec): min=37, max=2131.2k, avg=13737.95, stdev=111321.40
00:29:46.503 clat (msec): min=639, max=7178, avg=1995.62, stdev=2177.31
00:29:46.503 lat (msec): min=640, max=7189, avg=2009.36, stdev=2183.06
00:29:46.503 clat percentiles (msec):
00:29:46.503 | 1.00th=[ 651], 5.00th=[ 659], 10.00th=[ 684], 20.00th=[ 718],
00:29:46.503 | 30.00th=[ 768], 40.00th=[ 936], 50.00th=[ 1116], 60.00th=[ 1234],
00:29:46.503 | 70.00th=[ 1368], 80.00th=[ 1485], 90.00th=[ 6745], 95.00th=[ 6946],
00:29:46.503 | 99.00th=[ 7148], 99.50th=[ 7148], 99.90th=[ 7148], 99.95th=[ 7148],
00:29:46.503 | 99.99th=[ 7148]
00:29:46.503 bw ( KiB/s): min= 8192, max=194560, per=3.42%, avg=102133.33, stdev=65644.97, samples=12
00:29:46.503 iops : min= 8, max= 190, avg=99.67, stdev=64.21, samples=12
00:29:46.503 lat (msec) : 750=27.61%, 1000=14.15%, 2000=39.84%, >=2000=18.41%
00:29:46.503 cpu : usr=0.02%, sys=0.89%, ctx=828, majf=0, minf=32769
00:29:46.503 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.1%, 16=2.2%, 32=4.4%, >=64=91.3%
00:29:46.503 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:46.503 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2%
00:29:46.503 issued rwts: total=728,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:29:46.503 latency : target=0, window=0, percentile=100.00%, depth=128
00:29:46.503 job2: (groupid=0, jobs=1): err= 0: pid=1835450: Thu Dec 5 14:00:45 2024
00:29:46.504 read: IOPS=22, BW=22.2MiB/s (23.3MB/s)(225MiB/10118msec)
00:29:46.504 slat (usec): min=344, max=2090.2k, avg=44704.69, stdev=236339.70
00:29:46.504 clat (msec): min=57, max=8831, avg=4860.78, stdev=2244.39
00:29:46.504 lat (msec): min=126, max=8831, avg=4905.49, stdev=2240.31
00:29:46.504 clat percentiles (msec):
00:29:46.504 | 1.00th=[ 133], 5.00th=[ 275], 10.00th=[ 2500], 20.00th=[ 2769],
00:29:46.504 | 30.00th=[ 3406], 40.00th=[ 4044], 50.00th=[ 4665], 60.00th=[ 6007],
00:29:46.504 | 70.00th=[ 6275], 80.00th=[ 6409], 90.00th=[ 8658], 95.00th=[ 8792],
00:29:46.504 | 99.00th=[ 8792], 99.50th=[ 8792], 99.90th=[ 8792], 99.95th=[ 8792],
00:29:46.504 | 99.99th=[ 8792]
00:29:46.504 bw ( KiB/s): min=10240, max=55296, per=1.10%, avg=33024.83, stdev=15214.78, samples=6
00:29:46.504 iops : min= 10, max= 54, avg=32.17, stdev=14.89, samples=6
00:29:46.504 lat (msec) : 100=0.44%, 250=3.11%, 500=3.11%, >=2000=93.33%
00:29:46.504 cpu : usr=0.00%, sys=0.92%, ctx=542, majf=0, minf=32769
00:29:46.504 IO depths : 1=0.4%, 2=0.9%, 4=1.8%, 8=3.6%, 16=7.1%, 32=14.2%, >=64=72.0%
00:29:46.504 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:46.504 complete : 0=0.0%, 4=99.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.0%
00:29:46.504 issued rwts: total=225,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:29:46.504 latency : target=0, window=0, percentile=100.00%, depth=128
00:29:46.504 job2: (groupid=0, jobs=1): err= 0: pid=1835451: Thu Dec 5 14:00:45 2024
00:29:46.504 read: IOPS=41, BW=41.4MiB/s (43.4MB/s)(451MiB/10901msec)
00:29:46.504 slat (usec): min=31, max=2174.1k, avg=24056.76, stdev=149058.21
00:29:46.504 clat (msec): min=49, max=5397, avg=2398.13, stdev=1646.86
00:29:46.504 lat (msec): min=1017, max=5403, avg=2422.18, stdev=1644.84
00:29:46.504 clat percentiles (msec):
00:29:46.504 | 1.00th=[ 1020], 5.00th=[ 1036], 10.00th=[ 1099], 20.00th=[ 1217],
00:29:46.504 | 30.00th=[ 1267], 40.00th=[ 1301], 50.00th=[ 1334], 60.00th=[ 1452],
00:29:46.504 | 70.00th=[ 3440], 80.00th=[ 4799], 90.00th=[ 5067], 95.00th=[ 5269],
00:29:46.504 | 99.00th=[ 5403], 99.50th=[ 5403], 99.90th=[ 5403], 99.95th=[ 5403],
00:29:46.504 | 99.99th=[ 5403]
00:29:46.504 bw ( KiB/s): min= 6144, max=155648, per=2.46%, avg=73500.44, stdev=52067.60, samples=9
00:29:46.504 iops : min= 6, max= 152, avg=71.78, stdev=50.85, samples=9
00:29:46.504 lat (msec) : 50=0.22%, 1000=0.22%, 2000=64.08%, >=2000=35.48%
00:29:46.504 cpu : usr=0.05%, sys=0.88%, ctx=948, majf=0, minf=32769
00:29:46.504 IO depths : 1=0.2%, 2=0.4%, 4=0.9%, 8=1.8%, 16=3.5%, 32=7.1%, >=64=86.0%
00:29:46.504 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:46.504 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3%
00:29:46.504 issued rwts: total=451,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:29:46.504 latency : target=0, window=0, percentile=100.00%, depth=128
00:29:46.504 job2: (groupid=0, jobs=1): err= 0: pid=1835452: Thu Dec 5 14:00:45 2024
00:29:46.504 read: IOPS=39, BW=39.4MiB/s (41.3MB/s)(398MiB/10113msec)
00:29:46.504 slat (usec): min=56, max=2164.4k, avg=25142.88, stdev=149853.62
00:29:46.504 clat (msec): min=103, max=7182, avg=2377.32, stdev=2290.75
00:29:46.504 lat (msec): min=113, max=7200, avg=2402.46, stdev=2301.73
00:29:46.504 clat percentiles (msec):
00:29:46.504 | 1.00th=[ 134], 5.00th=[ 510], 10.00th=[ 869], 20.00th=[ 936],
00:29:46.504 | 30.00th=[ 1045], 40.00th=[ 1183], 50.00th=[ 1385], 60.00th=[ 1536],
00:29:46.504 | 70.00th=[ 1787], 80.00th=[ 4530], 90.00th=[ 6879], 95.00th=[ 7013],
00:29:46.504 | 99.00th=[ 7148], 99.50th=[ 7215], 99.90th=[ 7215], 99.95th=[ 7215],
00:29:46.504 | 99.99th=[ 7215]
00:29:46.504 bw ( KiB/s): min=18432, max=143360, per=2.65%, avg=79108.86, stdev=45837.78, samples=7
00:29:46.504 iops : min= 18, max= 140, avg=77.14, stdev=44.85, samples=7
00:29:46.504 lat (msec) : 250=1.26%, 500=3.52%, 750=3.02%, 1000=17.84%, 2000=48.49%
00:29:46.504 lat (msec) : >=2000=25.88%
00:29:46.504 cpu : usr=0.04%, sys=1.16%, ctx=812, majf=0, minf=32769
00:29:46.504 IO depths : 1=0.3%, 2=0.5%, 4=1.0%, 8=2.0%, 16=4.0%, 32=8.0%, >=64=84.2%
00:29:46.504 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:46.504 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4%
00:29:46.504 issued rwts: total=398,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:29:46.504 latency : target=0, window=0, percentile=100.00%, depth=128
00:29:46.504 job2: (groupid=0, jobs=1): err= 0: pid=1835453: Thu Dec 5 14:00:45 2024
00:29:46.504 read: IOPS=135, BW=136MiB/s (142MB/s)(1363MiB/10057msec)
00:29:46.504 slat (usec): min=33, max=110877, avg=7360.49, stdev=11813.08
00:29:46.504 clat (msec): min=17, max=1673, avg=886.01, stdev=358.47
00:29:46.504 lat (msec): min=77, max=1683, avg=893.37, stdev=360.61
00:29:46.504 clat percentiles (msec):
00:29:46.504 | 1.00th=[ 105], 5.00th=[ 489], 10.00th=[ 567], 20.00th=[ 617],
00:29:46.504 | 30.00th=[ 651], 40.00th=[ 693], 50.00th=[ 726], 60.00th=[ 852],
00:29:46.504 | 70.00th=[ 1099], 80.00th=[ 1234], 90.00th=[ 1469], 95.00th=[ 1569],
00:29:46.504 | 99.00th=[ 1653], 99.50th=[ 1653], 99.90th=[ 1670], 99.95th=[ 1670],
00:29:46.504 | 99.99th=[ 1670]
00:29:46.504 bw ( KiB/s): min=16384, max=233472, per=4.40%, avg=131527.11, stdev=59905.13, samples=18
00:29:46.504 iops : min= 16, max= 228, avg=128.44, stdev=58.50, samples=18
00:29:46.504 lat (msec) : 20=0.07%, 100=0.81%, 250=1.39%, 500=3.01%, 750=47.25%
00:29:46.504 lat (msec) : 1000=13.72%, 2000=33.75%
00:29:46.504 cpu : usr=0.03%, sys=1.77%, ctx=1587, majf=0, minf=32769
00:29:46.504 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.6%, 16=1.2%, 32=2.3%, >=64=95.4%
00:29:46.504 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:46.504 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:29:46.504 issued rwts: total=1363,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:29:46.504 latency : target=0, window=0, percentile=100.00%, depth=128
00:29:46.504 job2: (groupid=0, jobs=1): err= 0: pid=1835454: Thu Dec 5 14:00:45 2024
00:29:46.504 read: IOPS=68, BW=68.1MiB/s (71.4MB/s)(738MiB/10841msec)
00:29:46.504 slat (usec): min=54, max=2109.6k, avg=14646.22, stdev=89099.59
00:29:46.504 clat (msec): min=27, max=4104, avg=1785.05, stdev=1083.80
00:29:46.504 lat (msec): min=850, max=4114, avg=1799.70, stdev=1085.41
00:29:46.504 clat percentiles (msec):
00:29:46.504 | 1.00th=[ 860], 5.00th=[ 877], 10.00th=[ 894], 20.00th=[ 919],
00:29:46.504 | 30.00th=[ 1133], 40.00th=[ 1301], 50.00th=[ 1351], 60.00th=[ 1385],
00:29:46.504 | 70.00th=[ 1703], 80.00th=[ 2433], 90.00th=[ 4010], 95.00th=[ 4044],
00:29:46.504 | 99.00th=[ 4077], 99.50th=[ 4111], 99.90th=[ 4111], 99.95th=[ 4111],
00:29:46.504 | 99.99th=[ 4111]
00:29:46.504 bw ( KiB/s): min= 4087, max=151552, per=2.78%, avg=83202.80, stdev=45497.27, samples=15
00:29:46.504 iops : min= 3, max= 148, avg=81.13, stdev=44.62, samples=15
00:29:46.504 lat (msec) : 50=0.14%, 1000=23.98%, 2000=48.78%, >=2000=27.10%
00:29:46.504 cpu : usr=0.04%, sys=1.21%, ctx=1117, majf=0, minf=32769
00:29:46.504 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.1%, 16=2.2%, 32=4.3%, >=64=91.5%
00:29:46.504 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:46.504 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2%
00:29:46.504 issued rwts: total=738,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:29:46.504 latency : target=0, window=0, percentile=100.00%, depth=128
00:29:46.504 job3: (groupid=0, jobs=1): err= 0: pid=1835455: Thu Dec 5 14:00:45 2024
00:29:46.504 read: IOPS=30, BW=30.1MiB/s (31.6MB/s)(304MiB/10083msec)
00:29:46.504 slat (usec): min=423, max=2119.5k, avg=32900.72, stdev=202322.80
00:29:46.504 clat (msec): min=80, max=6244, avg=3720.05, stdev=2008.36
00:29:46.504 lat (msec): min=84, max=6248, avg=3752.95, stdev=1994.55
00:29:46.504 clat percentiles (msec):
00:29:46.504 | 1.00th=[ 92], 5.00th=[ 894], 10.00th=[ 1036], 20.00th=[ 1435],
00:29:46.504 | 30.00th=[ 1989], 40.00th=[ 3373], 50.00th=[ 3440], 60.00th=[ 4463],
00:29:46.504 | 70.00th=[ 5738], 80.00th=[ 5873], 90.00th=[ 6074], 95.00th=[ 6141],
00:29:46.504 | 99.00th=[ 6208], 99.50th=[ 6208], 99.90th=[ 6275], 99.95th=[ 6275],
00:29:46.504 | 99.99th=[ 6275]
00:29:46.504 bw ( KiB/s): min= 4096, max=184320, per=2.02%, avg=60416.00, stdev=70252.67, samples=6
00:29:46.504 iops : min= 4, max= 180, avg=59.00, stdev=68.61, samples=6
00:29:46.504 lat (msec) : 100=1.64%, 250=2.96%, 1000=5.26%, 2000=20.39%, >=2000=69.74%
00:29:46.504 cpu : usr=0.00%, sys=0.96%, ctx=725, majf=0, minf=32769
00:29:46.504 IO depths : 1=0.3%, 2=0.7%, 4=1.3%, 8=2.6%, 16=5.3%, 32=10.5%, >=64=79.3%
00:29:46.504 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:46.504 complete : 0=0.0%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.6%
00:29:46.504 issued rwts: total=304,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:29:46.504 latency : target=0, window=0, percentile=100.00%, depth=128
00:29:46.504 job3: (groupid=0, jobs=1): err= 0: pid=1835456: Thu Dec 5 14:00:45 2024
00:29:46.504 read: IOPS=61, BW=61.6MiB/s (64.6MB/s)(621MiB/10073msec)
00:29:46.504 slat (usec): min=44, max=2119.8k, avg=16102.89, stdev=135858.60
00:29:46.504 clat (msec): min=71, max=4869, avg=1546.03, stdev=1735.18
00:29:46.504 lat (msec): min=72, max=4870, avg=1562.13, stdev=1740.13
00:29:46.504 clat percentiles (msec):
00:29:46.505 | 1.00th=[ 81], 5.00th=[ 338], 10.00th=[ 363], 20.00th=[ 363],
00:29:46.505 | 30.00th=[ 368], 40.00th=[ 384], 50.00th=[ 558], 60.00th=[ 911],
00:29:46.505 | 70.00th=[ 1284], 80.00th=[ 3708], 90.00th=[ 4732], 95.00th=[ 4799],
00:29:46.505 | 99.00th=[ 4866], 99.50th=[ 4866], 99.90th=[ 4866], 99.95th=[ 4866],
00:29:46.505 | 99.99th=[ 4866]
00:29:46.505 bw ( KiB/s): min= 6144, max=358400, per=4.83%, avg=144291.57, stdev=146799.74, samples=7
00:29:46.505 iops : min= 6, max= 350, avg=140.86, stdev=143.40, samples=7
00:29:46.505 lat (msec) : 100=2.25%, 250=1.13%, 500=43.32%, 750=9.50%, 1000=7.73%
00:29:46.505 lat (msec) : 2000=11.11%, >=2000=24.96%
00:29:46.505 cpu : usr=0.01%, sys=0.85%, ctx=1043, majf=0, minf=32769
00:29:46.505 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.3%, 16=2.6%, 32=5.2%, >=64=89.9%
00:29:46.505 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:46.505 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2%
00:29:46.505 issued rwts: total=621,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:29:46.505 latency : target=0, window=0, percentile=100.00%, depth=128
00:29:46.505 job3: (groupid=0, jobs=1): err= 0: pid=1835457: Thu Dec 5 14:00:45 2024
00:29:46.505 read: IOPS=1, BW=1317KiB/s (1349kB/s)(18.0MiB/13996msec)
00:29:46.505 slat (msec): min=2, max=3168, avg=598.46, stdev=1031.16
00:29:46.505 clat (msec): min=3222, max=13912, avg=11202.24, stdev=3027.08
00:29:46.505 lat (msec): min=6390, max=13995, avg=11800.70, stdev=2344.65
00:29:46.505 clat percentiles (msec):
00:29:46.505 | 1.00th=[ 3239], 5.00th=[ 3239], 10.00th=[ 6409], 20.00th=[ 8557],
00:29:46.505 | 30.00th=[10671], 40.00th=[12818], 50.00th=[12818], 60.00th=[12818],
00:29:46.505 | 70.00th=[12818], 80.00th=[13892], 90.00th=[13892], 95.00th=[13892],
00:29:46.505 | 99.00th=[13892], 99.50th=[13892], 99.90th=[13892], 99.95th=[13892],
00:29:46.505 | 99.99th=[13892]
00:29:46.505 lat (msec) : >=2000=100.00%
00:29:46.505 cpu : usr=0.00%, sys=0.06%, ctx=50, majf=0, minf=4609
00:29:46.505 IO depths : 1=5.6%, 2=11.1%, 4=22.2%, 8=44.4%, 16=16.7%, 32=0.0%, >=64=0.0%
00:29:46.505 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:46.505 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0%
00:29:46.505 issued rwts: total=18,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:29:46.505 latency : target=0, window=0, percentile=100.00%, depth=128
00:29:46.505 job3: (groupid=0, jobs=1): err= 0: pid=1835458: Thu Dec 5 14:00:45 2024
00:29:46.505 read: IOPS=13, BW=13.6MiB/s (14.2MB/s)(162MiB/11924msec)
00:29:46.505 slat (usec): min=355, max=2082.0k, avg=73189.59, stdev=321636.26
00:29:46.505 clat (msec): min=66, max=9167, avg=7114.53, stdev=2209.20
00:29:46.505 lat (msec): min=1759, max=9169, avg=7187.72, stdev=2111.49
00:29:46.505 clat percentiles (msec):
00:29:46.505 | 1.00th=[ 1754], 5.00th=[ 3104], 10.00th=[ 3406], 20.00th=[ 4279],
00:29:46.505 | 30.00th=[ 7886], 40.00th=[ 8020], 50.00th=[ 8154], 60.00th=[ 8288],
00:29:46.505 | 70.00th=[ 8423], 80.00th=[ 8557], 90.00th=[ 8926], 95.00th=[ 9060],
00:29:46.505 | 99.00th=[ 9194], 99.50th=[ 9194], 99.90th=[ 9194], 99.95th=[ 9194],
00:29:46.505 | 99.99th=[ 9194]
00:29:46.505 bw ( KiB/s): min= 2048, max=38834, per=0.39%, avg=11581.33, stdev=13598.07, samples=6
00:29:46.505 iops : min= 2, max= 37, avg=11.00, stdev=12.96, samples=6
00:29:46.505 lat (msec) : 100=0.62%, 2000=1.23%, >=2000=98.15%
00:29:46.505 cpu : usr=0.03%, sys=0.46%, ctx=526, majf=0, minf=32769
00:29:46.505 IO depths : 1=0.6%, 2=1.2%, 4=2.5%, 8=4.9%, 16=9.9%, 32=19.8%, >=64=61.1%
00:29:46.505 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:46.505 complete : 0=0.0%, 4=97.2%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=2.8%
00:29:46.505 issued rwts: total=162,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:29:46.505 latency : target=0, window=0, percentile=100.00%, depth=128
00:29:46.505 job3: (groupid=0, jobs=1): err= 0: pid=1835459: Thu Dec 5 14:00:45 2024
00:29:46.505 read: IOPS=10, BW=10.5MiB/s (11.0MB/s)(148MiB/14094msec)
00:29:46.505 slat (usec): min=348, max=2145.3k, avg=80787.93, stdev=363023.45
00:29:46.505 clat (msec): min=2136, max=14061, avg=8476.64, stdev=2106.40
00:29:46.505 lat (msec): min=4197, max=14062, avg=8557.42, stdev=2090.15
00:29:46.505 clat percentiles (msec):
00:29:46.505 | 1.00th=[ 4212], 5.00th=[ 6477], 10.00th=[ 6544], 20.00th=[ 7617],
00:29:46.505 | 30.00th=[ 7752], 40.00th=[ 7953], 50.00th=[ 8154], 60.00th=[ 8288],
00:29:46.505 | 70.00th=[ 8423], 80.00th=[ 8490], 90.00th=[12818], 95.00th=[14026],
00:29:46.505 | 99.00th=[14026], 99.50th=[14026], 99.90th=[14026], 99.95th=[14026],
00:29:46.505 | 99.99th=[14026]
00:29:46.505 bw ( KiB/s): min= 2048, max=18432, per=0.29%, avg=8602.40, stdev=7439.87, samples=5
00:29:46.505 iops : min= 2, max= 18, avg= 8.40, stdev= 7.27, samples=5
00:29:46.505 lat (msec) : >=2000=100.00%
00:29:46.505 cpu : usr=0.00%, sys=0.49%, ctx=223, majf=0, minf=32769
00:29:46.505 IO depths : 1=0.7%, 2=1.4%, 4=2.7%, 8=5.4%, 16=10.8%, 32=21.6%, >=64=57.4%
00:29:46.505 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:46.505 complete : 0=0.0%, 4=95.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=4.5%
00:29:46.505 issued rwts: total=148,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:29:46.505 latency : target=0, window=0, percentile=100.00%, depth=128
00:29:46.505 job3: (groupid=0, jobs=1): err= 0: pid=1835460: Thu Dec 5 14:00:45 2024
00:29:46.505 read: IOPS=86, BW=86.0MiB/s (90.2MB/s)(869MiB/10103msec)
00:29:46.505 slat (usec): min=39, max=2041.7k, avg=11553.35, stdev=89267.83
00:29:46.505 clat (msec): min=59, max=3200, avg=1234.38, stdev=817.70
00:29:46.505 lat (msec): min=131, max=3219, avg=1245.94, stdev=820.09
00:29:46.505 clat percentiles (msec):
00:29:46.505 | 1.00th=[ 167], 5.00th=[ 617], 10.00th=[ 625], 20.00th=[ 667],
00:29:46.505 | 30.00th=[ 718], 40.00th=[ 776], 50.00th=[ 810], 60.00th=[ 1011],
00:29:46.505 | 70.00th=[ 1284], 80.00th=[ 2366], 90.00th=[ 2735], 95.00th=[ 3004],
00:29:46.505 | 99.00th=[ 3171], 99.50th=[ 3205], 99.90th=[ 3205], 99.95th=[ 3205],
00:29:46.505 | 99.99th=[ 3205]
00:29:46.505 bw ( KiB/s): min=35963, max=208896, per=4.23%, avg=126382.17, stdev=58605.71, samples=12
00:29:46.505 iops : min= 35, max= 204, avg=123.25, stdev=57.26, samples=12
00:29:46.505 lat (msec) : 100=0.12%, 250=1.04%, 500=1.04%, 750=33.26%, 1000=24.28%
00:29:46.505 lat (msec) : 2000=19.68%, >=2000=20.60%
00:29:46.505 cpu : usr=0.05%, sys=1.14%, ctx=893, majf=0, minf=32769
00:29:46.505 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=0.9%, 16=1.8%, 32=3.7%, >=64=92.8%
00:29:46.505 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:46.505 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:29:46.505 issued rwts: total=869,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:29:46.505 latency : target=0, window=0, percentile=100.00%, depth=128
00:29:46.505 job3: (groupid=0, jobs=1): err= 0: pid=1835461: Thu Dec 5 14:00:45 2024
00:29:46.505 read: IOPS=5, BW=5864KiB/s (6005kB/s)(58.0MiB/10128msec)
00:29:46.505 slat (usec): min=374, max=2108.8k, avg=173614.88, stdev=542773.91
00:29:46.505 clat (msec): min=57, max=10116, avg=4675.09, stdev=4000.93
00:29:46.505 lat (msec): min=128, max=10127, avg=4848.70, stdev=4015.51
00:29:46.505 clat percentiles (msec):
00:29:46.505 | 1.00th=[ 58], 5.00th=[ 134], 10.00th=[ 150], 20.00th=[ 279],
00:29:46.505 | 30.00th=[ 2333], 40.00th=[ 2467], 50.00th=[ 2467], 60.00th=[ 4597],
00:29:46.505 | 70.00th=[ 8926], 80.00th=[10000], 90.00th=[10000], 95.00th=[10134],
00:29:46.505 | 99.00th=[10134], 99.50th=[10134], 99.90th=[10134], 99.95th=[10134],
00:29:46.505 | 99.99th=[10134]
00:29:46.505 lat (msec) : 100=1.72%, 250=12.07%, 500=10.34%, >=2000=75.86%
00:29:46.505 cpu : usr=0.00%, sys=0.30%, ctx=98, majf=0, minf=14849
00:29:46.505 IO depths : 1=1.7%, 2=3.4%, 4=6.9%, 8=13.8%, 16=27.6%, 32=46.6%, >=64=0.0%
00:29:46.505 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:46.505 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0%
00:29:46.505 issued rwts: total=58,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:29:46.505 latency : target=0, window=0, percentile=100.00%, depth=128
00:29:46.505 job3: (groupid=0, jobs=1): err= 0: pid=1835462: Thu Dec 5 14:00:45 2024
00:29:46.505 read: IOPS=15, BW=15.7MiB/s (16.4MB/s)(157MiB/10023msec)
00:29:46.505 slat (usec): min=364, max=2112.4k, avg=63698.53, stdev=310097.75
00:29:46.505 clat (msec): min=21, max=9885, avg=3246.80, stdev=3857.03
00:29:46.505 lat (msec): min=23, max=9922, avg=3310.50, stdev=3885.72
00:29:46.505 clat percentiles (msec):
00:29:46.505 | 1.00th=[ 24], 5.00th=[ 31], 10.00th=[ 63], 20.00th=[ 104],
00:29:46.505 | 30.00th=[ 155], 40.00th=[ 197], 50.00th=[ 241], 60.00th=[ 3473],
00:29:46.505 | 70.00th=[ 7349], 80.00th=[ 7483], 90.00th=[ 9463], 95.00th=[ 9731],
00:29:46.505 | 99.00th=[ 9866], 99.50th=[ 9866], 99.90th=[ 9866], 99.95th=[ 9866],
00:29:46.505 | 99.99th=[ 9866]
00:29:46.505 lat (msec) : 50=7.64%, 100=10.83%, 250=38.22%, 500=1.27%, >=2000=42.04%
00:29:46.505 cpu : usr=0.01%, sys=0.78%, ctx=236, majf=0, minf=32769
00:29:46.505 IO depths : 1=0.6%, 2=1.3%, 4=2.5%, 8=5.1%, 16=10.2%, 32=20.4%, >=64=59.9%
00:29:46.505 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:46.505 complete : 0=0.0%, 4=96.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=3.2%
00:29:46.505 issued rwts: total=157,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:29:46.505 latency : target=0, window=0, percentile=100.00%, depth=128
00:29:46.505 job3: (groupid=0, jobs=1): err= 0: pid=1835463: Thu Dec 5 14:00:45 2024
00:29:46.505 read: IOPS=13, BW=13.3MiB/s (14.0MB/s)(188MiB/14092msec)
00:29:46.505 slat (usec): min=76, max=3202.0k, avg=63585.56, stdev=339739.98
00:29:46.505 clat (msec): min=926, max=12550, avg=8644.39, stdev=4663.38
00:29:46.505 lat (msec): min=931, max=12550, avg=8707.97, stdev=4635.43
00:29:46.505 clat percentiles (msec):
00:29:46.505 | 1.00th=[ 927], 5.00th=[ 936], 10.00th=[ 1003], 20.00th=[ 2106],
00:29:46.505 | 30.00th=[ 6275], 40.00th=[11745], 50.00th=[11879], 60.00th=[12013],
00:29:46.505 | 70.00th=[12147], 80.00th=[12281], 90.00th=[12416], 95.00th=[12416],
00:29:46.505 | 99.00th=[12550], 99.50th=[12550], 99.90th=[12550], 99.95th=[12550],
00:29:46.505 | 99.99th=[12550]
00:29:46.505 bw ( KiB/s): min= 2048, max=100352, per=0.70%, avg=20822.00, stdev=39124.81, samples=6
00:29:46.505 iops : min= 2, max= 98, avg=20.33, stdev=38.21, samples=6
00:29:46.505 lat (msec) : 1000=9.04%, 2000=1.60%, >=2000=89.36%
00:29:46.505 cpu : usr=0.00%, sys=0.55%, ctx=201, majf=0, minf=32769
00:29:46.505 IO depths : 1=0.5%, 2=1.1%, 4=2.1%, 8=4.3%, 16=8.5%, 32=17.0%, >=64=66.5%
00:29:46.505 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:46.505 complete : 0=0.0%, 4=98.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.6%
00:29:46.505 issued rwts: total=188,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:29:46.505 latency : target=0, window=0, percentile=100.00%, depth=128
00:29:46.505 job3: (groupid=0, jobs=1): err= 0: pid=1835464: Thu Dec 5 14:00:45 2024
00:29:46.505 read: IOPS=13, BW=13.9MiB/s (14.5MB/s)(194MiB/13998msec)
00:29:46.505 slat (usec): min=93, max=2094.9k, avg=61132.64, stdev=314794.81
00:29:46.506 clat (msec): min=858, max=11632, avg=8073.74, stdev=4070.28
00:29:46.506 lat (msec): min=862, max=11637, avg=8134.87, stdev=4037.62
00:29:46.506 clat percentiles (msec):
00:29:46.506 | 1.00th=[ 860], 5.00th=[ 2400], 10.00th=[ 2433], 20.00th=[ 2567],
00:29:46.506 | 30.00th=[ 3205], 40.00th=[ 9463], 50.00th=[10939], 60.00th=[11073],
00:29:46.506 | 70.00th=[11208], 80.00th=[11342], 90.00th=[11476], 95.00th=[11610],
00:29:46.506 | 99.00th=[11610], 99.50th=[11610], 99.90th=[11610], 99.95th=[11610],
00:29:46.506 | 99.99th=[11610]
00:29:46.506 bw ( KiB/s): min= 2048, max=67584, per=0.66%, avg=19595.86, stdev=26545.06, samples=7
00:29:46.506 iops : min= 2, max= 66, avg=19.00, stdev=26.01, samples=7
00:29:46.506 lat (msec) : 1000=1.55%, 2000=1.03%, >=2000=97.42%
00:29:46.506 cpu : usr=0.01%, sys=0.67%, ctx=232, majf=0, minf=32769
00:29:46.506 IO depths : 1=0.5%, 2=1.0%, 4=2.1%, 8=4.1%, 16=8.2%, 32=16.5%, >=64=67.5%
00:29:46.506 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:46.506 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.5%
00:29:46.506 issued rwts: total=194,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:29:46.506 latency : target=0, window=0, percentile=100.00%, depth=128
00:29:46.506 job3: (groupid=0, jobs=1): err= 0: pid=1835465: Thu Dec 5 14:00:45 2024
00:29:46.506 read: IOPS=20, BW=20.9MiB/s (21.9MB/s)(251MiB/11996msec)
00:29:46.506 slat (usec): min=67, max=2136.1k, avg=47528.83, stdev=276399.06
00:29:46.506 clat (msec): min=64, max=10927, avg=5763.70, stdev=4631.06
00:29:46.506 lat (msec): min=816, max=10932, avg=5811.23, stdev=4624.25
00:29:46.506 clat percentiles (msec):
00:29:46.506 | 1.00th=[ 818], 5.00th=[ 844], 10.00th=[ 860], 20.00th=[ 885],
00:29:46.506 | 30.00th=[ 953], 40.00th=[ 1083], 50.00th=[ 6678], 60.00th=[10268],
00:29:46.506 | 70.00th=[10402], 80.00th=[10537], 90.00th=[10805], 95.00th=[10805],
00:29:46.506 | 99.00th=[10939], 99.50th=[10939], 99.90th=[10939], 99.95th=[10939],
00:29:46.506 | 99.99th=[10939]
00:29:46.506 bw ( KiB/s): min= 2048, max=126976, per=1.40%, avg=41927.83, stdev=54267.16, samples=6
00:29:46.506 iops : min= 2, max= 124, avg=40.83, stdev=53.08, samples=6
00:29:46.506 lat (msec) : 100=0.40%, 1000=35.06%, 2000=7.97%, >=2000=56.57%
00:29:46.506 cpu : usr=0.01%, sys=0.77%, ctx=311, majf=0, minf=32769
00:29:46.506 IO depths : 1=0.4%, 2=0.8%, 4=1.6%, 8=3.2%, 16=6.4%, 32=12.7%, >=64=74.9%
00:29:46.506 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:46.506 complete : 0=0.0%, 4=99.2%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.8%
00:29:46.506 issued rwts: total=251,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:29:46.506 latency : target=0, window=0, percentile=100.00%, depth=128
00:29:46.506 job3: (groupid=0, jobs=1): err= 0: pid=1835466: Thu Dec 5 14:00:45 2024
00:29:46.506 read: IOPS=13, BW=13.5MiB/s (14.2MB/s)(147MiB/10857msec)
00:29:46.506 slat (usec): min=350, max=2108.7k, avg=73507.01, stdev=347070.62
00:29:46.506 clat (msec): min=50, max=10844, avg=7437.18, stdev=2459.90
00:29:46.506 lat (msec): min=2075, max=10844, avg=7510.69, stdev=2396.80
00:29:46.506 clat percentiles (msec):
00:29:46.506 | 1.00th=[ 2072], 5.00th=[ 2106], 10.00th=[ 2140], 20.00th=[ 6409],
00:29:46.506 | 30.00th=[ 6611], 40.00th=[ 8154], 50.00th=[ 8288], 60.00th=[ 8356],
00:29:46.506 | 70.00th=[ 8490], 80.00th=[ 8557], 90.00th=[10402], 95.00th=[10805],
00:29:46.506 | 99.00th=[10805], 99.50th=[10805], 99.90th=[10805], 99.95th=[10805],
00:29:46.506 | 99.99th=[10805]
00:29:46.506 bw ( KiB/s): min=10240, max=28672, per=0.65%, avg=19456.00, stdev=13033.39, samples=2 00:29:46.506 iops : min= 10, max= 28, avg=19.00, stdev=12.73, samples=2 00:29:46.506 lat (msec) : 100=0.68%, >=2000=99.32% 00:29:46.506 cpu : usr=0.00%, sys=0.64%, ctx=185, majf=0, minf=32769 00:29:46.506 IO depths : 1=0.7%, 2=1.4%, 4=2.7%, 8=5.4%, 16=10.9%, 32=21.8%, >=64=57.1% 00:29:46.506 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:46.506 complete : 0=0.0%, 4=95.2%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=4.8% 00:29:46.506 issued rwts: total=147,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:46.506 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:46.506 job3: (groupid=0, jobs=1): err= 0: pid=1835467: Thu Dec 5 14:00:45 2024 00:29:46.506 read: IOPS=3, BW=3945KiB/s (4040kB/s)(46.0MiB/11939msec) 00:29:46.506 slat (usec): min=485, max=2094.8k, avg=258086.27, stdev=660479.50 00:29:46.506 clat (msec): min=66, max=11931, avg=8969.66, stdev=3565.53 00:29:46.506 lat (msec): min=2111, max=11938, avg=9227.75, stdev=3328.53 00:29:46.506 clat percentiles (msec): 00:29:46.506 | 1.00th=[ 67], 5.00th=[ 2140], 10.00th=[ 2198], 20.00th=[ 6409], 00:29:46.506 | 30.00th=[ 6477], 40.00th=[ 8658], 50.00th=[10671], 60.00th=[11745], 00:29:46.506 | 70.00th=[11879], 80.00th=[11879], 90.00th=[11879], 95.00th=[11879], 00:29:46.506 | 99.00th=[11879], 99.50th=[11879], 99.90th=[11879], 99.95th=[11879], 00:29:46.506 | 99.99th=[11879] 00:29:46.506 lat (msec) : 100=2.17%, >=2000=97.83% 00:29:46.506 cpu : usr=0.01%, sys=0.22%, ctx=65, majf=0, minf=11777 00:29:46.506 IO depths : 1=2.2%, 2=4.3%, 4=8.7%, 8=17.4%, 16=34.8%, 32=32.6%, >=64=0.0% 00:29:46.506 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:46.506 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:29:46.506 issued rwts: total=46,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:46.506 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:46.506 job4: (groupid=0, jobs=1): err= 0: pid=1835468: Thu Dec 5 14:00:45 2024 00:29:46.506 read: IOPS=6, BW=6245KiB/s (6395kB/s)(73.0MiB/11970msec) 00:29:46.506 slat (usec): min=497, max=4180.2k, avg=163000.65, stdev=634907.41 00:29:46.506 clat (msec): min=69, max=11957, avg=10673.04, stdev=1922.41 00:29:46.506 lat (msec): min=4249, max=11968, avg=10836.04, stdev=1459.63 00:29:46.506 clat percentiles (msec): 00:29:46.506 | 1.00th=[ 70], 5.00th=[ 6409], 10.00th=[10671], 20.00th=[10805], 00:29:46.506 | 30.00th=[10939], 40.00th=[10939], 50.00th=[11073], 60.00th=[11208], 00:29:46.506 | 70.00th=[11342], 80.00th=[11476], 90.00th=[11610], 95.00th=[11879], 00:29:46.506 | 99.00th=[12013], 99.50th=[12013], 99.90th=[12013], 99.95th=[12013], 00:29:46.506 | 99.99th=[12013] 00:29:46.506 lat (msec) : 100=1.37%, >=2000=98.63% 00:29:46.506 cpu : usr=0.00%, sys=0.31%, ctx=247, majf=0, minf=18689 00:29:46.506 IO depths : 1=1.4%, 2=2.7%, 4=5.5%, 8=11.0%, 16=21.9%, 32=43.8%, >=64=13.7% 00:29:46.506 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:46.506 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:29:46.506 issued rwts: total=73,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:46.506 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:46.506 job4: (groupid=0, jobs=1): err= 0: pid=1835469: Thu Dec 5 14:00:45 2024 00:29:46.506 read: IOPS=25, BW=25.1MiB/s (26.3MB/s)(300MiB/11971msec) 00:29:46.506 slat (usec): min=108, max=2122.6k, avg=39666.79, stdev=247172.64 00:29:46.506 clat 
(msec): min=69, max=10314, avg=4743.18, stdev=4347.34 00:29:46.506 lat (msec): min=650, max=10315, avg=4782.85, stdev=4346.32 00:29:46.506 clat percentiles (msec): 00:29:46.506 | 1.00th=[ 651], 5.00th=[ 651], 10.00th=[ 667], 20.00th=[ 726], 00:29:46.506 | 30.00th=[ 835], 40.00th=[ 1234], 50.00th=[ 1435], 60.00th=[ 8154], 00:29:46.506 | 70.00th=[ 9866], 80.00th=[10000], 90.00th=[10134], 95.00th=[10268], 00:29:46.506 | 99.00th=[10268], 99.50th=[10268], 99.90th=[10268], 99.95th=[10268], 00:29:46.506 | 99.99th=[10268] 00:29:46.506 bw ( KiB/s): min= 2048, max=202752, per=1.47%, avg=44000.25, stdev=69990.81, samples=8 00:29:46.506 iops : min= 2, max= 198, avg=42.88, stdev=68.40, samples=8 00:29:46.506 lat (msec) : 100=0.33%, 750=22.67%, 1000=12.00%, 2000=19.00%, >=2000=46.00% 00:29:46.506 cpu : usr=0.02%, sys=0.58%, ctx=398, majf=0, minf=32769 00:29:46.506 IO depths : 1=0.3%, 2=0.7%, 4=1.3%, 8=2.7%, 16=5.3%, 32=10.7%, >=64=79.0% 00:29:46.506 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:46.506 complete : 0=0.0%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.6% 00:29:46.506 issued rwts: total=300,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:46.506 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:46.506 job4: (groupid=0, jobs=1): err= 0: pid=1835470: Thu Dec 5 14:00:45 2024 00:29:46.506 read: IOPS=90, BW=90.5MiB/s (94.9MB/s)(915MiB/10106msec) 00:29:46.506 slat (usec): min=42, max=1829.0k, avg=10950.63, stdev=63761.85 00:29:46.506 clat (msec): min=83, max=4700, avg=1353.73, stdev=842.49 00:29:46.506 lat (msec): min=129, max=4707, avg=1364.68, stdev=846.04 00:29:46.506 clat percentiles (msec): 00:29:46.506 | 1.00th=[ 305], 5.00th=[ 625], 10.00th=[ 693], 20.00th=[ 902], 00:29:46.506 | 30.00th=[ 961], 40.00th=[ 995], 50.00th=[ 1062], 60.00th=[ 1133], 00:29:46.506 | 70.00th=[ 1217], 80.00th=[ 1485], 90.00th=[ 3205], 95.00th=[ 3306], 00:29:46.506 | 99.00th=[ 3406], 99.50th=[ 4665], 99.90th=[ 4732], 99.95th=[ 4732], 00:29:46.506 | 99.99th=[ 4732] 00:29:46.506 bw ( KiB/s): min=34816, max=210944, per=3.37%, avg=100751.63, stdev=40338.62, samples=16 00:29:46.506 iops : min= 34, max= 206, avg=98.38, stdev=39.38, samples=16 00:29:46.506 lat (msec) : 100=0.11%, 250=0.66%, 500=1.53%, 750=9.40%, 1000=29.84% 00:29:46.506 lat (msec) : 2000=44.59%, >=2000=13.88% 00:29:46.506 cpu : usr=0.01%, sys=1.52%, ctx=1231, majf=0, minf=32770 00:29:46.506 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.9%, 16=1.7%, 32=3.5%, >=64=93.1% 00:29:46.506 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:46.506 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:46.506 issued rwts: total=915,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:46.506 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:46.506 job4: (groupid=0, jobs=1): err= 0: pid=1835471: Thu Dec 5 14:00:45 2024 00:29:46.506 read: IOPS=51, BW=51.8MiB/s (54.3MB/s)(521MiB/10053msec) 00:29:46.506 slat (usec): min=40, max=2070.1k, avg=19196.75, stdev=134591.45 00:29:46.506 clat (msec): min=49, max=6822, avg=1127.25, stdev=972.24 00:29:46.506 lat (msec): min=60, max=6828, avg=1146.44, stdev=1002.93 00:29:46.506 clat percentiles (msec): 00:29:46.506 | 1.00th=[ 64], 5.00th=[ 363], 10.00th=[ 684], 20.00th=[ 894], 00:29:46.506 | 30.00th=[ 919], 40.00th=[ 969], 50.00th=[ 986], 60.00th=[ 1003], 00:29:46.506 | 70.00th=[ 1070], 80.00th=[ 1099], 90.00th=[ 1200], 95.00th=[ 1418], 00:29:46.506 | 99.00th=[ 6745], 99.50th=[ 6745], 99.90th=[ 6812], 99.95th=[ 6812], 00:29:46.506 | 
99.99th=[ 6812] 00:29:46.506 bw ( KiB/s): min=63488, max=155648, per=4.02%, avg=120149.33, stdev=31265.84, samples=6 00:29:46.506 iops : min= 62, max= 152, avg=117.33, stdev=30.53, samples=6 00:29:46.506 lat (msec) : 50=0.19%, 100=1.54%, 250=1.92%, 500=4.41%, 750=4.22% 00:29:46.506 lat (msec) : 1000=44.72%, 2000=38.96%, >=2000=4.03% 00:29:46.506 cpu : usr=0.01%, sys=0.87%, ctx=670, majf=0, minf=32769 00:29:46.506 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.5%, 16=3.1%, 32=6.1%, >=64=87.9% 00:29:46.506 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:46.506 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:29:46.507 issued rwts: total=521,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:46.507 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:46.507 job4: (groupid=0, jobs=1): err= 0: pid=1835472: Thu Dec 5 14:00:45 2024 00:29:46.507 read: IOPS=24, BW=24.6MiB/s (25.8MB/s)(294MiB/11952msec) 00:29:46.507 slat (usec): min=96, max=2124.9k, avg=34022.61, stdev=193241.55 00:29:46.507 clat (msec): min=1545, max=6569, avg=3680.39, stdev=1782.02 00:29:46.507 lat (msec): min=1559, max=6573, avg=3714.41, stdev=1781.87 00:29:46.507 clat percentiles (msec): 00:29:46.507 | 1.00th=[ 1552], 5.00th=[ 1569], 10.00th=[ 1569], 20.00th=[ 1586], 00:29:46.507 | 30.00th=[ 1636], 40.00th=[ 3104], 50.00th=[ 3943], 60.00th=[ 4329], 00:29:46.507 | 70.00th=[ 4665], 80.00th=[ 5000], 90.00th=[ 6477], 95.00th=[ 6477], 00:29:46.507 | 99.00th=[ 6544], 99.50th=[ 6544], 99.90th=[ 6544], 99.95th=[ 6544], 00:29:46.507 | 99.99th=[ 6544] 00:29:46.507 bw ( KiB/s): min=24576, max=86016, per=1.91%, avg=56967.33, stdev=28376.64, samples=6 00:29:46.507 iops : min= 24, max= 84, avg=55.50, stdev=27.88, samples=6 00:29:46.507 lat (msec) : 2000=32.99%, >=2000=67.01% 00:29:46.507 cpu : usr=0.00%, sys=0.69%, ctx=569, majf=0, minf=32769 00:29:46.507 IO depths : 1=0.3%, 2=0.7%, 4=1.4%, 8=2.7%, 16=5.4%, 32=10.9%, >=64=78.6% 00:29:46.507 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:46.507 complete : 0=0.0%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.6% 00:29:46.507 issued rwts: total=294,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:46.507 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:46.507 job4: (groupid=0, jobs=1): err= 0: pid=1835473: Thu Dec 5 14:00:45 2024 00:29:46.507 read: IOPS=11, BW=11.2MiB/s (11.7MB/s)(134MiB/11991msec) 00:29:46.507 slat (usec): min=734, max=2122.0k, avg=88935.19, stdev=395051.58 00:29:46.507 clat (msec): min=72, max=11871, avg=10377.89, stdev=2111.85 00:29:46.507 lat (msec): min=2157, max=11896, avg=10466.83, stdev=1915.48 00:29:46.507 clat percentiles (msec): 00:29:46.507 | 1.00th=[ 2165], 5.00th=[ 5604], 10.00th=[ 6477], 20.00th=[10805], 00:29:46.507 | 30.00th=[10939], 40.00th=[11073], 50.00th=[11073], 60.00th=[11208], 00:29:46.507 | 70.00th=[11342], 80.00th=[11476], 90.00th=[11610], 95.00th=[11745], 00:29:46.507 | 99.00th=[11879], 99.50th=[11879], 99.90th=[11879], 99.95th=[11879], 00:29:46.507 | 99.99th=[11879] 00:29:46.507 bw ( KiB/s): min= 3992, max= 4096, per=0.14%, avg=4061.33, stdev=60.04, samples=3 00:29:46.507 iops : min= 3, max= 4, avg= 3.67, stdev= 0.58, samples=3 00:29:46.507 lat (msec) : 100=0.75%, >=2000=99.25% 00:29:46.507 cpu : usr=0.00%, sys=0.80%, ctx=244, majf=0, minf=32769 00:29:46.507 IO depths : 1=0.7%, 2=1.5%, 4=3.0%, 8=6.0%, 16=11.9%, 32=23.9%, >=64=53.0% 00:29:46.507 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:46.507 complete : 0=0.0%, 
4=87.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=12.5% 00:29:46.507 issued rwts: total=134,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:46.507 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:46.507 job4: (groupid=0, jobs=1): err= 0: pid=1835474: Thu Dec 5 14:00:45 2024 00:29:46.507 read: IOPS=9, BW=9356KiB/s (9580kB/s)(109MiB/11930msec) 00:29:46.507 slat (usec): min=410, max=2123.0k, avg=108814.65, stdev=433773.08 00:29:46.507 clat (msec): min=68, max=11862, avg=10001.00, stdev=2781.95 00:29:46.507 lat (msec): min=2104, max=11929, avg=10109.81, stdev=2616.92 00:29:46.507 clat percentiles (msec): 00:29:46.507 | 1.00th=[ 2106], 5.00th=[ 2198], 10.00th=[ 4329], 20.00th=[ 8658], 00:29:46.507 | 30.00th=[10939], 40.00th=[10939], 50.00th=[11073], 60.00th=[11208], 00:29:46.507 | 70.00th=[11342], 80.00th=[11610], 90.00th=[11745], 95.00th=[11879], 00:29:46.507 | 99.00th=[11879], 99.50th=[11879], 99.90th=[11879], 99.95th=[11879], 00:29:46.507 | 99.99th=[11879] 00:29:46.507 lat (msec) : 100=0.92%, >=2000=99.08% 00:29:46.507 cpu : usr=0.01%, sys=0.41%, ctx=234, majf=0, minf=27905 00:29:46.507 IO depths : 1=0.9%, 2=1.8%, 4=3.7%, 8=7.3%, 16=14.7%, 32=29.4%, >=64=42.2% 00:29:46.507 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:46.507 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:29:46.507 issued rwts: total=109,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:46.507 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:46.507 job4: (groupid=0, jobs=1): err= 0: pid=1835475: Thu Dec 5 14:00:45 2024 00:29:46.507 read: IOPS=40, BW=40.5MiB/s (42.5MB/s)(408MiB/10067msec) 00:29:46.507 slat (usec): min=46, max=2115.1k, avg=24513.65, stdev=137490.27 00:29:46.507 clat (msec): min=63, max=7592, avg=1525.00, stdev=1619.13 00:29:46.507 lat (msec): min=96, max=7618, avg=1549.52, stdev=1647.08 00:29:46.507 clat percentiles (msec): 00:29:46.507 | 1.00th=[ 133], 5.00th=[ 342], 10.00th=[ 481], 20.00th=[ 768], 00:29:46.507 | 30.00th=[ 902], 40.00th=[ 927], 50.00th=[ 986], 60.00th=[ 1083], 00:29:46.507 | 70.00th=[ 1385], 80.00th=[ 1871], 90.00th=[ 2366], 95.00th=[ 6611], 00:29:46.507 | 99.00th=[ 7617], 99.50th=[ 7617], 99.90th=[ 7617], 99.95th=[ 7617], 00:29:46.507 | 99.99th=[ 7617] 00:29:46.507 bw ( KiB/s): min=14336, max=157696, per=3.20%, avg=95664.83, stdev=50267.89, samples=6 00:29:46.507 iops : min= 14, max= 154, avg=93.33, stdev=49.07, samples=6 00:29:46.507 lat (msec) : 100=0.49%, 250=2.45%, 500=8.33%, 750=7.60%, 1000=32.60% 00:29:46.507 lat (msec) : 2000=32.60%, >=2000=15.93% 00:29:46.507 cpu : usr=0.02%, sys=0.92%, ctx=677, majf=0, minf=32769 00:29:46.507 IO depths : 1=0.2%, 2=0.5%, 4=1.0%, 8=2.0%, 16=3.9%, 32=7.8%, >=64=84.6% 00:29:46.507 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:46.507 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:29:46.507 issued rwts: total=408,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:46.507 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:46.507 job4: (groupid=0, jobs=1): err= 0: pid=1835476: Thu Dec 5 14:00:45 2024 00:29:46.507 read: IOPS=10, BW=10.1MiB/s (10.6MB/s)(122MiB/12026msec) 00:29:46.507 slat (usec): min=360, max=2120.3k, avg=97979.08, stdev=412221.32 00:29:46.507 clat (msec): min=72, max=12025, avg=10292.41, stdev=2641.68 00:29:46.507 lat (msec): min=2140, max=12025, avg=10390.39, stdev=2475.97 00:29:46.507 clat percentiles (msec): 00:29:46.507 | 1.00th=[ 2140], 5.00th=[ 4279], 10.00th=[ 6409], 20.00th=[ 8658], 
00:29:46.507 | 30.00th=[10939], 40.00th=[10939], 50.00th=[11208], 60.00th=[11476], 00:29:46.507 | 70.00th=[11745], 80.00th=[12013], 90.00th=[12013], 95.00th=[12013], 00:29:46.507 | 99.00th=[12013], 99.50th=[12013], 99.90th=[12013], 99.95th=[12013], 00:29:46.507 | 99.99th=[12013] 00:29:46.507 lat (msec) : 100=0.82%, >=2000=99.18% 00:29:46.507 cpu : usr=0.00%, sys=0.65%, ctx=261, majf=0, minf=31233 00:29:46.507 IO depths : 1=0.8%, 2=1.6%, 4=3.3%, 8=6.6%, 16=13.1%, 32=26.2%, >=64=48.4% 00:29:46.507 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:46.507 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:29:46.507 issued rwts: total=122,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:46.507 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:46.507 job4: (groupid=0, jobs=1): err= 0: pid=1835477: Thu Dec 5 14:00:45 2024 00:29:46.507 read: IOPS=141, BW=142MiB/s (148MB/s)(1427MiB/10079msec) 00:29:46.507 slat (usec): min=29, max=1072.1k, avg=7008.52, stdev=38615.39 00:29:46.507 clat (msec): min=73, max=3611, avg=766.05, stdev=590.48 00:29:46.507 lat (msec): min=111, max=3613, avg=773.06, stdev=595.16 00:29:46.507 clat percentiles (msec): 00:29:46.507 | 1.00th=[ 192], 5.00th=[ 376], 10.00th=[ 397], 20.00th=[ 426], 00:29:46.507 | 30.00th=[ 443], 40.00th=[ 468], 50.00th=[ 498], 60.00th=[ 531], 00:29:46.507 | 70.00th=[ 659], 80.00th=[ 1099], 90.00th=[ 1670], 95.00th=[ 2366], 00:29:46.507 | 99.00th=[ 2567], 99.50th=[ 2601], 99.90th=[ 3574], 99.95th=[ 3608], 00:29:46.507 | 99.99th=[ 3608] 00:29:46.507 bw ( KiB/s): min= 8192, max=307200, per=5.56%, avg=166166.50, stdev=107658.16, samples=16 00:29:46.507 iops : min= 8, max= 300, avg=162.19, stdev=105.07, samples=16 00:29:46.507 lat (msec) : 100=0.07%, 250=1.54%, 500=48.84%, 750=23.76%, 1000=4.34% 00:29:46.507 lat (msec) : 2000=14.02%, >=2000=7.43% 00:29:46.507 cpu : usr=0.04%, sys=1.32%, ctx=1615, majf=0, minf=32769 00:29:46.507 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.6%, 16=1.1%, 32=2.2%, >=64=95.6% 00:29:46.507 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:46.507 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:46.507 issued rwts: total=1427,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:46.507 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:46.507 job4: (groupid=0, jobs=1): err= 0: pid=1835478: Thu Dec 5 14:00:45 2024 00:29:46.507 read: IOPS=121, BW=122MiB/s (128MB/s)(1229MiB/10094msec) 00:29:46.507 slat (usec): min=26, max=1727.7k, avg=8147.36, stdev=52320.80 00:29:46.507 clat (msec): min=77, max=2716, avg=918.13, stdev=609.88 00:29:46.507 lat (msec): min=96, max=2718, avg=926.27, stdev=612.76 00:29:46.507 clat percentiles (msec): 00:29:46.507 | 1.00th=[ 192], 5.00th=[ 514], 10.00th=[ 558], 20.00th=[ 575], 00:29:46.507 | 30.00th=[ 609], 40.00th=[ 642], 50.00th=[ 684], 60.00th=[ 751], 00:29:46.507 | 70.00th=[ 885], 80.00th=[ 995], 90.00th=[ 2433], 95.00th=[ 2601], 00:29:46.507 | 99.00th=[ 2635], 99.50th=[ 2702], 99.90th=[ 2702], 99.95th=[ 2702], 00:29:46.507 | 99.99th=[ 2702] 00:29:46.507 bw ( KiB/s): min=14336, max=239616, per=5.01%, avg=149819.13, stdev=73889.83, samples=15 00:29:46.507 iops : min= 14, max= 234, avg=146.27, stdev=72.16, samples=15 00:29:46.507 lat (msec) : 100=0.16%, 250=1.22%, 500=3.25%, 750=55.49%, 1000=21.48% 00:29:46.507 lat (msec) : 2000=8.06%, >=2000=10.33% 00:29:46.507 cpu : usr=0.01%, sys=1.22%, ctx=1229, majf=0, minf=32769 00:29:46.507 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.7%, 16=1.3%, 
32=2.6%, >=64=94.9% 00:29:46.507 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:46.507 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:46.507 issued rwts: total=1229,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:46.507 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:46.507 job4: (groupid=0, jobs=1): err= 0: pid=1835479: Thu Dec 5 14:00:45 2024 00:29:46.507 read: IOPS=7, BW=7810KiB/s (7997kB/s)(91.0MiB/11932msec) 00:29:46.507 slat (usec): min=507, max=2121.9k, avg=130317.55, stdev=445343.57 00:29:46.507 clat (msec): min=72, max=11915, avg=7553.17, stdev=3353.88 00:29:46.507 lat (msec): min=2100, max=11931, avg=7683.48, stdev=3289.75 00:29:46.507 clat percentiles (msec): 00:29:46.507 | 1.00th=[ 72], 5.00th=[ 2123], 10.00th=[ 2198], 20.00th=[ 5671], 00:29:46.507 | 30.00th=[ 5873], 40.00th=[ 6141], 50.00th=[ 6342], 60.00th=[ 8557], 00:29:46.507 | 70.00th=[11342], 80.00th=[11610], 90.00th=[11610], 95.00th=[11745], 00:29:46.507 | 99.00th=[11879], 99.50th=[11879], 99.90th=[11879], 99.95th=[11879], 00:29:46.507 | 99.99th=[11879] 00:29:46.507 lat (msec) : 100=1.10%, >=2000=98.90% 00:29:46.507 cpu : usr=0.01%, sys=0.51%, ctx=268, majf=0, minf=23297 00:29:46.507 IO depths : 1=1.1%, 2=2.2%, 4=4.4%, 8=8.8%, 16=17.6%, 32=35.2%, >=64=30.8% 00:29:46.508 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:46.508 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:29:46.508 issued rwts: total=91,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:46.508 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:46.508 job4: (groupid=0, jobs=1): err= 0: pid=1835480: Thu Dec 5 14:00:45 2024 00:29:46.508 read: IOPS=12, BW=12.7MiB/s (13.3MB/s)(151MiB/11936msec) 00:29:46.508 slat (usec): min=338, max=2186.1k, avg=78586.45, stdev=380882.61 00:29:46.508 clat (msec): min=68, max=11640, avg=9526.61, stdev=3550.62 00:29:46.508 lat (msec): min=939, max=11642, avg=9605.19, stdev=3465.15 00:29:46.508 clat percentiles (msec): 00:29:46.508 | 1.00th=[ 936], 5.00th=[ 953], 10.00th=[ 961], 20.00th=[10805], 00:29:46.508 | 30.00th=[10939], 40.00th=[10939], 50.00th=[11073], 60.00th=[11073], 00:29:46.508 | 70.00th=[11208], 80.00th=[11342], 90.00th=[11476], 95.00th=[11610], 00:29:46.508 | 99.00th=[11610], 99.50th=[11610], 99.90th=[11610], 99.95th=[11610], 00:29:46.508 | 99.99th=[11610] 00:29:46.508 bw ( KiB/s): min= 1988, max=34746, per=0.26%, avg=7829.00, stdev=13212.40, samples=6 00:29:46.508 iops : min= 1, max= 33, avg= 7.33, stdev=12.61, samples=6 00:29:46.508 lat (msec) : 100=0.66%, 1000=10.60%, 2000=0.66%, >=2000=88.08% 00:29:46.508 cpu : usr=0.00%, sys=0.56%, ctx=158, majf=0, minf=32769 00:29:46.508 IO depths : 1=0.7%, 2=1.3%, 4=2.6%, 8=5.3%, 16=10.6%, 32=21.2%, >=64=58.3% 00:29:46.508 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:46.508 complete : 0=0.0%, 4=96.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=4.0% 00:29:46.508 issued rwts: total=151,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:46.508 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:46.508 job5: (groupid=0, jobs=1): err= 0: pid=1835481: Thu Dec 5 14:00:45 2024 00:29:46.508 read: IOPS=49, BW=49.9MiB/s (52.3MB/s)(505MiB/10118msec) 00:29:46.508 slat (usec): min=61, max=2092.1k, avg=19808.76, stdev=135738.38 00:29:46.508 clat (msec): min=112, max=5362, avg=2320.56, stdev=1817.62 00:29:46.508 lat (msec): min=140, max=5369, avg=2340.37, stdev=1821.05 00:29:46.508 clat percentiles (msec): 00:29:46.508 
| 1.00th=[ 150], 5.00th=[ 247], 10.00th=[ 376], 20.00th=[ 701], 00:29:46.508 | 30.00th=[ 751], 40.00th=[ 1267], 50.00th=[ 1519], 60.00th=[ 3071], 00:29:46.508 | 70.00th=[ 3239], 80.00th=[ 4799], 90.00th=[ 5201], 95.00th=[ 5336], 00:29:46.508 | 99.00th=[ 5336], 99.50th=[ 5336], 99.90th=[ 5336], 99.95th=[ 5336], 00:29:46.508 | 99.99th=[ 5336] 00:29:46.508 bw ( KiB/s): min=12288, max=194560, per=2.88%, avg=86016.00, stdev=65503.99, samples=9 00:29:46.508 iops : min= 12, max= 190, avg=84.00, stdev=63.97, samples=9 00:29:46.508 lat (msec) : 250=5.15%, 500=8.91%, 750=15.84%, 1000=9.31%, 2000=16.04% 00:29:46.508 lat (msec) : >=2000=44.75% 00:29:46.508 cpu : usr=0.02%, sys=1.13%, ctx=1161, majf=0, minf=32769 00:29:46.508 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.6%, 16=3.2%, 32=6.3%, >=64=87.5% 00:29:46.508 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:46.508 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:29:46.508 issued rwts: total=505,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:46.508 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:46.508 job5: (groupid=0, jobs=1): err= 0: pid=1835482: Thu Dec 5 14:00:45 2024 00:29:46.508 read: IOPS=86, BW=86.3MiB/s (90.4MB/s)(1032MiB/11965msec) 00:29:46.508 slat (usec): min=41, max=2111.6k, avg=11518.69, stdev=82838.08 00:29:46.508 clat (msec): min=73, max=3004, avg=1397.91, stdev=711.38 00:29:46.508 lat (msec): min=572, max=3007, avg=1409.43, stdev=711.80 00:29:46.508 clat percentiles (msec): 00:29:46.508 | 1.00th=[ 592], 5.00th=[ 642], 10.00th=[ 693], 20.00th=[ 768], 00:29:46.508 | 30.00th=[ 860], 40.00th=[ 936], 50.00th=[ 1133], 60.00th=[ 1301], 00:29:46.508 | 70.00th=[ 1854], 80.00th=[ 2265], 90.00th=[ 2433], 95.00th=[ 2735], 00:29:46.508 | 99.00th=[ 2970], 99.50th=[ 2970], 99.90th=[ 3004], 99.95th=[ 3004], 00:29:46.508 | 99.99th=[ 3004] 00:29:46.508 bw ( KiB/s): min=34816, max=239616, per=3.87%, avg=115611.38, stdev=59923.42, samples=16 00:29:46.508 iops : min= 34, max= 234, avg=112.88, stdev=58.48, samples=16 00:29:46.508 lat (msec) : 100=0.10%, 750=17.34%, 1000=26.36%, 2000=31.88%, >=2000=24.32% 00:29:46.508 cpu : usr=0.03%, sys=1.21%, ctx=2045, majf=0, minf=32769 00:29:46.508 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.6%, 32=3.1%, >=64=93.9% 00:29:46.508 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:46.508 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:46.508 issued rwts: total=1032,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:46.508 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:46.508 job5: (groupid=0, jobs=1): err= 0: pid=1835483: Thu Dec 5 14:00:45 2024 00:29:46.508 read: IOPS=78, BW=78.7MiB/s (82.5MB/s)(795MiB/10104msec) 00:29:46.508 slat (usec): min=29, max=95414, avg=12591.46, stdev=17042.18 00:29:46.508 clat (msec): min=89, max=2951, avg=1472.86, stdev=607.91 00:29:46.508 lat (msec): min=168, max=2976, avg=1485.46, stdev=609.87 00:29:46.508 clat percentiles (msec): 00:29:46.508 | 1.00th=[ 228], 5.00th=[ 600], 10.00th=[ 860], 20.00th=[ 1003], 00:29:46.508 | 30.00th=[ 1150], 40.00th=[ 1217], 50.00th=[ 1351], 60.00th=[ 1502], 00:29:46.508 | 70.00th=[ 1603], 80.00th=[ 1888], 90.00th=[ 2534], 95.00th=[ 2702], 00:29:46.508 | 99.00th=[ 2869], 99.50th=[ 2869], 99.90th=[ 2937], 99.95th=[ 2937], 00:29:46.508 | 99.99th=[ 2937] 00:29:46.508 bw ( KiB/s): min=20480, max=165888, per=2.69%, avg=80304.76, stdev=41786.79, samples=17 00:29:46.508 iops : min= 20, max= 162, avg=78.35, stdev=40.78, 
samples=17 00:29:46.508 lat (msec) : 100=0.13%, 250=0.88%, 500=3.14%, 750=2.01%, 1000=12.83% 00:29:46.508 lat (msec) : 2000=62.39%, >=2000=18.62% 00:29:46.508 cpu : usr=0.04%, sys=1.29%, ctx=1882, majf=0, minf=32769 00:29:46.508 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.0%, 16=2.0%, 32=4.0%, >=64=92.1% 00:29:46.508 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:46.508 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:46.508 issued rwts: total=795,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:46.508 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:46.508 job5: (groupid=0, jobs=1): err= 0: pid=1835484: Thu Dec 5 14:00:45 2024 00:29:46.508 read: IOPS=94, BW=94.7MiB/s (99.3MB/s)(955MiB/10087msec) 00:29:46.508 slat (usec): min=37, max=107644, avg=10496.13, stdev=17075.09 00:29:46.508 clat (msec): min=59, max=2000, avg=1240.74, stdev=395.55 00:29:46.508 lat (msec): min=126, max=2001, avg=1251.24, stdev=394.07 00:29:46.508 clat percentiles (msec): 00:29:46.508 | 1.00th=[ 330], 5.00th=[ 617], 10.00th=[ 709], 20.00th=[ 877], 00:29:46.508 | 30.00th=[ 1036], 40.00th=[ 1133], 50.00th=[ 1250], 60.00th=[ 1385], 00:29:46.508 | 70.00th=[ 1485], 80.00th=[ 1620], 90.00th=[ 1754], 95.00th=[ 1871], 00:29:46.508 | 99.00th=[ 1955], 99.50th=[ 1972], 99.90th=[ 2005], 99.95th=[ 2005], 00:29:46.508 | 99.99th=[ 2005] 00:29:46.508 bw ( KiB/s): min=20480, max=229376, per=3.33%, avg=99646.41, stdev=55864.16, samples=17 00:29:46.508 iops : min= 20, max= 224, avg=97.24, stdev=54.65, samples=17 00:29:46.508 lat (msec) : 100=0.10%, 250=0.73%, 500=2.41%, 750=8.80%, 1000=17.38% 00:29:46.508 lat (msec) : 2000=70.47%, >=2000=0.10% 00:29:46.508 cpu : usr=0.00%, sys=1.18%, ctx=1741, majf=0, minf=32769 00:29:46.508 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.7%, 32=3.4%, >=64=93.4% 00:29:46.508 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:46.508 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:46.508 issued rwts: total=955,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:46.508 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:46.508 job5: (groupid=0, jobs=1): err= 0: pid=1835485: Thu Dec 5 14:00:45 2024 00:29:46.508 read: IOPS=91, BW=91.8MiB/s (96.2MB/s)(928MiB/10111msec) 00:29:46.508 slat (usec): min=62, max=109048, avg=10825.71, stdev=18076.22 00:29:46.508 clat (msec): min=60, max=3480, avg=1208.97, stdev=853.42 00:29:46.508 lat (msec): min=124, max=3483, avg=1219.80, stdev=858.19 00:29:46.508 clat percentiles (msec): 00:29:46.508 | 1.00th=[ 326], 5.00th=[ 359], 10.00th=[ 363], 20.00th=[ 380], 00:29:46.508 | 30.00th=[ 575], 40.00th=[ 751], 50.00th=[ 919], 60.00th=[ 1200], 00:29:46.508 | 70.00th=[ 1552], 80.00th=[ 1955], 90.00th=[ 2635], 95.00th=[ 2937], 00:29:46.508 | 99.00th=[ 3373], 99.50th=[ 3473], 99.90th=[ 3473], 99.95th=[ 3473], 00:29:46.508 | 99.99th=[ 3473] 00:29:46.508 bw ( KiB/s): min=22528, max=362496, per=3.42%, avg=102361.44, stdev=102856.51, samples=16 00:29:46.508 iops : min= 22, max= 354, avg=99.87, stdev=100.47, samples=16 00:29:46.508 lat (msec) : 100=0.11%, 250=0.75%, 500=25.54%, 750=14.01%, 1000=12.82% 00:29:46.508 lat (msec) : 2000=27.80%, >=2000=18.97% 00:29:46.508 cpu : usr=0.03%, sys=1.31%, ctx=1890, majf=0, minf=32769 00:29:46.508 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.9%, 16=1.7%, 32=3.4%, >=64=93.2% 00:29:46.508 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:46.508 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.1% 00:29:46.508 issued rwts: total=928,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:46.508 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:46.508 job5: (groupid=0, jobs=1): err= 0: pid=1835486: Thu Dec 5 14:00:45 2024 00:29:46.508 read: IOPS=90, BW=90.0MiB/s (94.4MB/s)(908MiB/10084msec) 00:29:46.508 slat (usec): min=33, max=2134.4k, avg=11035.45, stdev=72048.57 00:29:46.508 clat (msec): min=60, max=4080, avg=1279.98, stdev=1002.06 00:29:46.508 lat (msec): min=125, max=4114, avg=1291.02, stdev=1007.09 00:29:46.508 clat percentiles (msec): 00:29:46.508 | 1.00th=[ 330], 5.00th=[ 468], 10.00th=[ 481], 20.00th=[ 502], 00:29:46.509 | 30.00th=[ 523], 40.00th=[ 827], 50.00th=[ 1070], 60.00th=[ 1133], 00:29:46.509 | 70.00th=[ 1250], 80.00th=[ 1620], 90.00th=[ 3272], 95.00th=[ 3641], 00:29:46.509 | 99.00th=[ 4010], 99.50th=[ 4044], 99.90th=[ 4077], 99.95th=[ 4077], 00:29:46.509 | 99.99th=[ 4077] 00:29:46.509 bw ( KiB/s): min=14336, max=281138, per=3.82%, avg=114187.79, stdev=82708.11, samples=14 00:29:46.509 iops : min= 14, max= 274, avg=111.36, stdev=80.79, samples=14 00:29:46.509 lat (msec) : 100=0.11%, 250=0.66%, 500=19.27%, 750=17.40%, 1000=8.70% 00:29:46.509 lat (msec) : 2000=39.54%, >=2000=14.32% 00:29:46.509 cpu : usr=0.03%, sys=1.17%, ctx=1847, majf=0, minf=32769 00:29:46.509 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.9%, 16=1.8%, 32=3.5%, >=64=93.1% 00:29:46.509 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:46.509 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:46.509 issued rwts: total=908,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:46.509 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:46.509 job5: (groupid=0, jobs=1): err= 0: pid=1835487: Thu Dec 5 14:00:45 2024 00:29:46.509 read: IOPS=80, BW=80.0MiB/s (83.9MB/s)(804MiB/10045msec) 00:29:46.509 slat (usec): min=39, max=2096.7k, avg=12437.75, stdev=94598.00 00:29:46.509 clat (msec): min=41, max=4666, avg=1459.72, stdev=1379.15 00:29:46.509 lat (msec): min=61, max=4672, avg=1472.16, stdev=1383.05 00:29:46.509 clat percentiles (msec): 00:29:46.509 | 1.00th=[ 75], 5.00th=[ 207], 10.00th=[ 355], 20.00th=[ 477], 00:29:46.509 | 30.00th=[ 510], 40.00th=[ 927], 50.00th=[ 1028], 60.00th=[ 1385], 00:29:46.509 | 70.00th=[ 1452], 80.00th=[ 1519], 90.00th=[ 4463], 95.00th=[ 4597], 00:29:46.509 | 99.00th=[ 4665], 99.50th=[ 4665], 99.90th=[ 4665], 99.95th=[ 4665], 00:29:46.509 | 99.99th=[ 4665] 00:29:46.509 bw ( KiB/s): min=12288, max=272384, per=4.18%, avg=124982.55, stdev=77541.51, samples=11 00:29:46.509 iops : min= 12, max= 266, avg=122.00, stdev=75.66, samples=11 00:29:46.509 lat (msec) : 50=0.12%, 100=1.99%, 250=4.48%, 500=21.39%, 750=8.21% 00:29:46.509 lat (msec) : 1000=12.44%, 2000=34.83%, >=2000=16.54% 00:29:46.509 cpu : usr=0.06%, sys=1.11%, ctx=1966, majf=0, minf=32769 00:29:46.509 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=2.0%, 32=4.0%, >=64=92.2% 00:29:46.509 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:46.509 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:46.509 issued rwts: total=804,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:46.509 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:46.509 job5: (groupid=0, jobs=1): err= 0: pid=1835488: Thu Dec 5 14:00:45 2024 00:29:46.509 read: IOPS=117, BW=117MiB/s (123MB/s)(1183MiB/10077msec) 00:29:46.509 slat (usec): min=473, max=105116, avg=8450.06, stdev=12068.13 00:29:46.509 clat (msec): min=76, max=2296, 
avg=980.82, stdev=543.51 00:29:46.509 lat (msec): min=79, max=2326, avg=989.27, stdev=546.89 00:29:46.509 clat percentiles (msec): 00:29:46.509 | 1.00th=[ 249], 5.00th=[ 279], 10.00th=[ 409], 20.00th=[ 575], 00:29:46.509 | 30.00th=[ 701], 40.00th=[ 760], 50.00th=[ 793], 60.00th=[ 844], 00:29:46.509 | 70.00th=[ 1099], 80.00th=[ 1469], 90.00th=[ 2022], 95.00th=[ 2123], 00:29:46.509 | 99.00th=[ 2198], 99.50th=[ 2232], 99.90th=[ 2299], 99.95th=[ 2299], 00:29:46.509 | 99.99th=[ 2299] 00:29:46.509 bw ( KiB/s): min=14336, max=403456, per=4.25%, avg=127108.94, stdev=90590.93, samples=17 00:29:46.509 iops : min= 14, max= 394, avg=124.06, stdev=88.47, samples=17 00:29:46.509 lat (msec) : 100=0.25%, 250=1.01%, 500=12.68%, 750=23.92%, 1000=28.83% 00:29:46.509 lat (msec) : 2000=22.65%, >=2000=10.65% 00:29:46.509 cpu : usr=0.01%, sys=1.53%, ctx=2704, majf=0, minf=32769 00:29:46.509 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.7%, 16=1.4%, 32=2.7%, >=64=94.7% 00:29:46.509 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:46.509 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:46.509 issued rwts: total=1183,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:46.509 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:46.509 job5: (groupid=0, jobs=1): err= 0: pid=1835489: Thu Dec 5 14:00:45 2024 00:29:46.509 read: IOPS=144, BW=145MiB/s (152MB/s)(1451MiB/10014msec) 00:29:46.509 slat (usec): min=39, max=2134.4k, avg=6888.89, stdev=56607.46 00:29:46.509 clat (msec): min=13, max=3355, avg=810.34, stdev=811.42 00:29:46.509 lat (msec): min=14, max=3378, avg=817.23, stdev=815.45 00:29:46.509 clat percentiles (msec): 00:29:46.509 | 1.00th=[ 28], 5.00th=[ 87], 10.00th=[ 255], 20.00th=[ 313], 00:29:46.509 | 30.00th=[ 388], 40.00th=[ 439], 50.00th=[ 535], 60.00th=[ 575], 00:29:46.509 | 70.00th=[ 676], 80.00th=[ 810], 90.00th=[ 2165], 95.00th=[ 2836], 00:29:46.509 | 99.00th=[ 3306], 99.50th=[ 3306], 99.90th=[ 3306], 99.95th=[ 3339], 00:29:46.509 | 99.99th=[ 3339] 00:29:46.509 bw ( KiB/s): min= 8192, max=495616, per=6.98%, avg=208580.92, stdev=129796.05, samples=13 00:29:46.509 iops : min= 8, max= 484, avg=203.69, stdev=126.75, samples=13 00:29:46.509 lat (msec) : 20=0.41%, 50=2.07%, 100=3.65%, 250=1.65%, 500=34.60% 00:29:46.509 lat (msec) : 750=34.25%, 1000=4.76%, 2000=5.24%, >=2000=13.37% 00:29:46.509 cpu : usr=0.04%, sys=1.64%, ctx=2365, majf=0, minf=32769 00:29:46.509 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.6%, 16=1.1%, 32=2.2%, >=64=95.7% 00:29:46.509 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:46.509 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:46.509 issued rwts: total=1451,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:46.509 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:46.509 job5: (groupid=0, jobs=1): err= 0: pid=1835490: Thu Dec 5 14:00:45 2024 00:29:46.509 read: IOPS=46, BW=46.7MiB/s (49.0MB/s)(505MiB/10810msec) 00:29:46.509 slat (usec): min=28, max=2122.7k, avg=21304.46, stdev=146947.73 00:29:46.509 clat (msec): min=48, max=5451, avg=2515.05, stdev=1643.96 00:29:46.509 lat (msec): min=482, max=5455, avg=2536.35, stdev=1642.28 00:29:46.509 clat percentiles (msec): 00:29:46.509 | 1.00th=[ 493], 5.00th=[ 575], 10.00th=[ 1083], 20.00th=[ 1250], 00:29:46.509 | 30.00th=[ 1452], 40.00th=[ 1586], 50.00th=[ 2106], 60.00th=[ 2198], 00:29:46.509 | 70.00th=[ 2433], 80.00th=[ 5067], 90.00th=[ 5336], 95.00th=[ 5403], 00:29:46.509 | 99.00th=[ 5470], 99.50th=[ 5470], 99.90th=[ 5470], 
99.95th=[ 5470], 00:29:46.509 | 99.99th=[ 5470] 00:29:46.509 bw ( KiB/s): min= 8192, max=268288, per=2.57%, avg=76862.20, stdev=81137.55, samples=10 00:29:46.509 iops : min= 8, max= 262, avg=75.00, stdev=79.29, samples=10 00:29:46.509 lat (msec) : 50=0.20%, 500=0.99%, 750=7.33%, 2000=40.40%, >=2000=51.09% 00:29:46.509 cpu : usr=0.03%, sys=0.89%, ctx=972, majf=0, minf=32769 00:29:46.509 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.6%, 16=3.2%, 32=6.3%, >=64=87.5% 00:29:46.509 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:46.509 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:29:46.509 issued rwts: total=505,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:46.509 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:46.509 job5: (groupid=0, jobs=1): err= 0: pid=1835491: Thu Dec 5 14:00:45 2024 00:29:46.509 read: IOPS=203, BW=203MiB/s (213MB/s)(2044MiB/10045msec) 00:29:46.509 slat (usec): min=40, max=108724, avg=4890.11, stdev=8626.54 00:29:46.509 clat (msec): min=42, max=1641, avg=580.04, stdev=288.38 00:29:46.509 lat (msec): min=63, max=1644, avg=584.93, stdev=290.47 00:29:46.509 clat percentiles (msec): 00:29:46.509 | 1.00th=[ 192], 5.00th=[ 255], 10.00th=[ 257], 20.00th=[ 262], 00:29:46.509 | 30.00th=[ 347], 40.00th=[ 498], 50.00th=[ 592], 60.00th=[ 684], 00:29:46.509 | 70.00th=[ 735], 80.00th=[ 785], 90.00th=[ 818], 95.00th=[ 936], 00:29:46.509 | 99.00th=[ 1586], 99.50th=[ 1620], 99.90th=[ 1636], 99.95th=[ 1636], 00:29:46.509 | 99.99th=[ 1636] 00:29:46.509 bw ( KiB/s): min=114688, max=505856, per=7.72%, avg=230859.59, stdev=119110.16, samples=17 00:29:46.509 iops : min= 112, max= 494, avg=225.41, stdev=116.32, samples=17 00:29:46.509 lat (msec) : 50=0.05%, 100=0.29%, 250=2.25%, 500=37.72%, 750=31.90% 00:29:46.509 lat (msec) : 1000=22.90%, 2000=4.89% 00:29:46.509 cpu : usr=0.09%, sys=1.78%, ctx=3561, majf=0, minf=32769 00:29:46.509 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.6%, >=64=96.9% 00:29:46.509 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:46.509 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:46.509 issued rwts: total=2044,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:46.509 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:46.509 job5: (groupid=0, jobs=1): err= 0: pid=1835492: Thu Dec 5 14:00:45 2024 00:29:46.509 read: IOPS=58, BW=59.0MiB/s (61.8MB/s)(596MiB/10109msec) 00:29:46.509 slat (usec): min=65, max=1594.9k, avg=16792.35, stdev=76934.08 00:29:46.509 clat (msec): min=98, max=3399, avg=1761.95, stdev=762.39 00:29:46.509 lat (msec): min=159, max=3411, avg=1778.74, stdev=763.23 00:29:46.509 clat percentiles (msec): 00:29:46.509 | 1.00th=[ 186], 5.00th=[ 506], 10.00th=[ 1099], 20.00th=[ 1284], 00:29:46.509 | 30.00th=[ 1385], 40.00th=[ 1485], 50.00th=[ 1502], 60.00th=[ 1586], 00:29:46.509 | 70.00th=[ 2022], 80.00th=[ 2601], 90.00th=[ 2869], 95.00th=[ 3339], 00:29:46.509 | 99.00th=[ 3373], 99.50th=[ 3406], 99.90th=[ 3406], 99.95th=[ 3406], 00:29:46.509 | 99.99th=[ 3406] 00:29:46.509 bw ( KiB/s): min=22528, max=126976, per=2.47%, avg=73885.54, stdev=34750.30, samples=13 00:29:46.509 iops : min= 22, max= 124, avg=72.15, stdev=33.94, samples=13 00:29:46.509 lat (msec) : 100=0.17%, 250=1.34%, 500=3.36%, 750=1.85%, 1000=1.68% 00:29:46.509 lat (msec) : 2000=61.41%, >=2000=30.20% 00:29:46.509 cpu : usr=0.00%, sys=1.03%, ctx=1148, majf=0, minf=32769 00:29:46.509 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.3%, 16=2.7%, 32=5.4%, >=64=89.4% 
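Each "lat (msec)" line buckets completions by order of magnitude, which makes the share of pathologically slow IO easy to pull out of a captured log. A small parsing sketch, assuming this human-readable output was saved verbatim to a file named fio.log (the filename is an assumption):

    # Print the ">=2000 ms" bucket share for every job summary that has one.
    grep 'lat (msec)' fio.log | grep -o '>=2000=[0-9.]*%'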
00:29:46.509 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:46.509 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:29:46.509 issued rwts: total=596,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:46.509 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:46.509 job5: (groupid=0, jobs=1): err= 0: pid=1835493: Thu Dec 5 14:00:45 2024 00:29:46.509 read: IOPS=115, BW=115MiB/s (121MB/s)(1154MiB/10028msec) 00:29:46.509 slat (usec): min=59, max=123235, avg=8664.52, stdev=15263.34 00:29:46.509 clat (msec): min=25, max=2276, avg=1022.24, stdev=526.35 00:29:46.509 lat (msec): min=31, max=2312, avg=1030.90, stdev=529.49 00:29:46.509 clat percentiles (msec): 00:29:46.509 | 1.00th=[ 63], 5.00th=[ 236], 10.00th=[ 359], 20.00th=[ 477], 00:29:46.509 | 30.00th=[ 776], 40.00th=[ 894], 50.00th=[ 936], 60.00th=[ 1036], 00:29:46.509 | 70.00th=[ 1217], 80.00th=[ 1435], 90.00th=[ 1838], 95.00th=[ 2056], 00:29:46.509 | 99.00th=[ 2265], 99.50th=[ 2265], 99.90th=[ 2265], 99.95th=[ 2265], 00:29:46.509 | 99.99th=[ 2265] 00:29:46.510 bw ( KiB/s): min=43008, max=276480, per=3.91%, avg=116869.11, stdev=62928.79, samples=18 00:29:46.510 iops : min= 42, max= 270, avg=114.11, stdev=61.44, samples=18 00:29:46.510 lat (msec) : 50=0.26%, 100=2.08%, 250=2.86%, 500=16.12%, 750=5.37% 00:29:46.510 lat (msec) : 1000=29.81%, 2000=37.87%, >=2000=5.63% 00:29:46.510 cpu : usr=0.06%, sys=1.17%, ctx=1921, majf=0, minf=32769 00:29:46.510 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.7%, 16=1.4%, 32=2.8%, >=64=94.5% 00:29:46.510 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:46.510 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:46.510 issued rwts: total=1154,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:46.510 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:46.510 00:29:46.510 Run status group 0 (all jobs): 00:29:46.510 READ: bw=2919MiB/s (3061MB/s), 1317KiB/s-203MiB/s (1349kB/s-213MB/s), io=40.2GiB (43.2GB), run=10014-14118msec 00:29:46.510 00:29:46.510 Disk stats (read/write): 00:29:46.510 nvme0n1: ios=37289/0, merge=0/0, ticks=7654171/0, in_queue=7654171, util=98.97% 00:29:46.510 nvme1n1: ios=66725/0, merge=0/0, ticks=8031430/0, in_queue=8031430, util=99.13% 00:29:46.510 nvme2n1: ios=51051/0, merge=0/0, ticks=8578977/0, in_queue=8578977, util=99.19% 00:29:46.510 nvme3n1: ios=25282/0, merge=0/0, ticks=7933731/0, in_queue=7933731, util=99.16% 00:29:46.510 nvme4n1: ios=45912/0, merge=0/0, ticks=7444448/0, in_queue=7444448, util=99.17% 00:29:46.510 nvme5n1: ios=102750/0, merge=0/0, ticks=7661089/0, in_queue=7661089, util=99.29% 00:29:46.510 14:00:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@38 -- # sync 00:29:46.510 14:00:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # seq 0 5 00:29:46.510 14:00:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:29:46.510 14:00:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode0 00:29:47.511 NQN:nqn.2016-06.io.spdk:cnode0 disconnected 1 controller(s) 00:29:47.511 14:00:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000000 00:29:47.511 14:00:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1223 -- # local i=0 00:29:47.511 14:00:46 
nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:29:47.511 14:00:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # grep -q -w SPDK00000000000000 00:29:47.511 14:00:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:29:47.511 14:00:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # grep -q -w SPDK00000000000000 00:29:47.511 14:00:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # return 0 00:29:47.511 14:00:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:47.511 14:00:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.511 14:00:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:29:47.511 14:00:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:47.511 14:00:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:29:47.511 14:00:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:29:48.448 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:29:48.448 14:00:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000001 00:29:48.448 14:00:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1223 -- # local i=0 00:29:48.448 14:00:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:29:48.448 14:00:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # grep -q -w SPDK00000000000001 00:29:48.448 14:00:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:29:48.448 14:00:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # grep -q -w SPDK00000000000001 00:29:48.448 14:00:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # return 0 00:29:48.448 14:00:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:48.448 14:00:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:48.448 14:00:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:29:48.448 14:00:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:48.448 14:00:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:29:48.448 14:00:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:29:49.381 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:29:49.381 14:00:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000002 00:29:49.381 14:00:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1223 -- # local i=0 
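After the fio run, the test tears the fabric down one controller at a time: `nvme disconnect` on the initiator, a poll of `lsblk -o NAME,SERIAL` until the namespace's serial number disappears, then `nvmf_delete_subsystem` over SPDK's JSON-RPC on the target side. A condensed reconstruction of that loop follows; the retry limit, sleep interval, and rpc.py path are assumptions (the real helpers are waitforserial_disconnect in autotest_common.sh and the per-controller loop in srq_overwhelm.sh):

    # Reconstruction sketch of the teardown pattern visible in the trace above.
    for i in $(seq 0 5); do
        nvme disconnect -n "nqn.2016-06.io.spdk:cnode$i"
        serial="SPDK0000000000000$i"   # serial strings as shown in the trace
        # Wait until no block device reports this serial any more.
        for try in $(seq 1 20); do
            lsblk -l -o NAME,SERIAL | grep -q -w "$serial" || break
            sleep 1
        done
        # Remove the subsystem on the target side (rpc.py path is an assumption).
        scripts/rpc.py nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"
    done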
00:29:49.381 14:00:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:29:49.381 14:00:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # grep -q -w SPDK00000000000002 00:29:49.381 14:00:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # grep -q -w SPDK00000000000002 00:29:49.381 14:00:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:29:49.381 14:00:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # return 0 00:29:49.381 14:00:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:29:49.381 14:00:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:49.381 14:00:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:29:49.381 14:00:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:49.381 14:00:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:29:49.381 14:00:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:29:50.315 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:29:50.315 14:00:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000003 00:29:50.315 14:00:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1223 -- # local i=0 00:29:50.315 14:00:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:29:50.315 14:00:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # grep -q -w SPDK00000000000003 00:29:50.315 14:00:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # grep -q -w SPDK00000000000003 00:29:50.315 14:00:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:29:50.315 14:00:50 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # return 0 00:29:50.315 14:00:50 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:29:50.315 14:00:50 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:50.315 14:00:50 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:29:50.315 14:00:50 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:50.315 14:00:50 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:29:50.315 14:00:50 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:29:51.254 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:29:51.254 14:00:50 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000004 00:29:51.254 14:00:50 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1223 -- 
# local i=0 00:29:51.255 14:00:50 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:29:51.255 14:00:50 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # grep -q -w SPDK00000000000004 00:29:51.255 14:00:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:29:51.255 14:00:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # grep -q -w SPDK00000000000004 00:29:51.255 14:00:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # return 0 00:29:51.255 14:00:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:29:51.255 14:00:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:51.255 14:00:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:29:51.255 14:00:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:51.255 14:00:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:29:51.255 14:00:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:29:52.188 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:29:52.188 14:00:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000005 00:29:52.188 14:00:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1223 -- # local i=0 00:29:52.188 14:00:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:29:52.188 14:00:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # grep -q -w SPDK00000000000005 00:29:52.188 14:00:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:29:52.188 14:00:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # grep -q -w SPDK00000000000005 00:29:52.188 14:00:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # return 0 00:29:52.188 14:00:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:29:52.188 14:00:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:52.188 14:00:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:29:52.188 14:00:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:52.188 14:00:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:29:52.188 14:00:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@48 -- # nvmftestfini 00:29:52.188 14:00:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:52.188 14:00:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@121 -- # sync 00:29:52.188 14:00:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:29:52.188 
14:00:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:29:52.188 14:00:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@124 -- # set +e 00:29:52.188 14:00:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:52.188 14:00:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:29:52.188 rmmod nvme_rdma 00:29:52.188 rmmod nvme_fabrics 00:29:52.446 14:00:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:52.446 14:00:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@128 -- # set -e 00:29:52.446 14:00:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@129 -- # return 0 00:29:52.447 14:00:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@517 -- # '[' -n 1833982 ']' 00:29:52.447 14:00:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@518 -- # killprocess 1833982 00:29:52.447 14:00:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@954 -- # '[' -z 1833982 ']' 00:29:52.447 14:00:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@958 -- # kill -0 1833982 00:29:52.447 14:00:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@959 -- # uname 00:29:52.447 14:00:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:52.447 14:00:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1833982 00:29:52.447 14:00:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:52.447 14:00:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:52.447 14:00:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1833982' 00:29:52.447 killing process with pid 1833982 00:29:52.447 14:00:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@973 -- # kill 1833982 00:29:52.447 14:00:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@978 -- # wait 1833982 00:29:52.705 14:00:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:52.705 14:00:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:29:52.705 00:29:52.705 real 0m34.344s 00:29:52.705 user 1m59.817s 00:29:52.705 sys 0m14.840s 00:29:52.705 14:00:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:52.705 14:00:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:29:52.706 ************************************ 00:29:52.706 END TEST nvmf_srq_overwhelm 00:29:52.706 ************************************ 00:29:52.706 14:00:52 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=rdma 00:29:52.706 14:00:52 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:52.706 14:00:52 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:52.706 14:00:52 
nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:29:52.706 ************************************ 00:29:52.706 START TEST nvmf_shutdown 00:29:52.706 ************************************ 00:29:52.706 14:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=rdma 00:29:52.965 * Looking for test storage... 00:29:52.965 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:29:52.965 14:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:52.965 14:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:29:52.965 14:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:52.965 14:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:52.965 14:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:52.965 14:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:52.965 14:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:52.965 14:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:29:52.965 14:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:29:52.965 14:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:29:52.965 14:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:29:52.965 14:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:29:52.965 14:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:29:52.965 14:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:29:52.965 14:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:52.965 14:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:29:52.965 14:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:29:52.965 14:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:52.965 14:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:52.965 14:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:29:52.965 14:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:29:52.965 14:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:52.965 14:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:29:52.965 14:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:29:52.965 14:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:29:52.965 14:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:29:52.965 14:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:52.965 14:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:29:52.965 14:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:29:52.965 14:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:52.965 14:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:52.965 14:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:29:52.965 14:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:52.965 14:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:52.965 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:52.965 --rc genhtml_branch_coverage=1 00:29:52.965 --rc genhtml_function_coverage=1 00:29:52.965 --rc genhtml_legend=1 00:29:52.965 --rc geninfo_all_blocks=1 00:29:52.965 --rc geninfo_unexecuted_blocks=1 00:29:52.965 00:29:52.965 ' 00:29:52.965 14:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:52.965 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:52.965 --rc genhtml_branch_coverage=1 00:29:52.965 --rc genhtml_function_coverage=1 00:29:52.965 --rc genhtml_legend=1 00:29:52.965 --rc geninfo_all_blocks=1 00:29:52.965 --rc geninfo_unexecuted_blocks=1 00:29:52.965 00:29:52.965 ' 00:29:52.965 14:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:52.965 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:52.965 --rc genhtml_branch_coverage=1 00:29:52.965 --rc genhtml_function_coverage=1 00:29:52.965 --rc genhtml_legend=1 00:29:52.965 --rc geninfo_all_blocks=1 00:29:52.965 --rc geninfo_unexecuted_blocks=1 00:29:52.965 00:29:52.965 ' 00:29:52.965 14:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:52.965 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:52.965 --rc genhtml_branch_coverage=1 00:29:52.965 --rc genhtml_function_coverage=1 00:29:52.965 --rc genhtml_legend=1 00:29:52.965 --rc geninfo_all_blocks=1 00:29:52.965 --rc geninfo_unexecuted_blocks=1 00:29:52.965 00:29:52.965 ' 00:29:52.965 14:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:29:52.965 14:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # 
uname -s 00:29:52.965 14:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:52.965 14:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:52.965 14:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:52.965 14:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:52.965 14:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:52.965 14:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:52.965 14:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:52.965 14:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:52.965 14:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:52.965 14:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:52.965 14:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:29:52.966 14:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:29:52.966 14:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:52.966 14:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:52.966 14:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:52.966 14:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:52.966 14:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:29:52.966 14:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:29:52.966 14:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:52.966 14:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:52.966 14:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:52.966 14:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:52.966 14:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:52.966 14:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:52.966 14:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:29:52.966 14:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:52.966 14:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:29:52.966 14:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:52.966 14:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:52.966 14:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:52.966 14:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:52.966 14:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:52.966 14:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:52.966 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:52.966 14:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:52.966 14:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:52.966 14:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:52.966 14:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:29:52.966 14:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:29:52.966 14:00:52 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:29:52.966 14:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:52.966 14:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:52.966 14:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:52.966 ************************************ 00:29:52.966 START TEST nvmf_shutdown_tc1 00:29:52.966 ************************************ 00:29:52.966 14:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:29:52.966 14:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:29:52.966 14:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:29:52.966 14:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:29:52.966 14:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:52.966 14:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:52.966 14:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:52.966 14:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:52.966 14:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:52.966 14:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:52.966 14:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:52.966 14:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:52.966 14:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:52.966 14:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:29:52.966 14:00:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:59.540 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:59.540 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:29:59.540 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:59.540 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:59.540 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:59.540 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:59.540 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:59.540 14:00:58 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:29:59.540 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:59.540 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:29:59.540 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:29:59.540 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:29:59.540 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:29:59.540 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:29:59.540 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:29:59.540 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:59.540 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:59.540 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:59.540 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:59.540 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:59.540 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:59.540 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:59.540 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:59.540 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:59.540 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:59.540 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:59.540 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:59.540 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:59.540 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:29:59.540 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:29:59.540 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:29:59.540 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:29:59.540 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@354 -- # 
pci_devs=("${mlx[@]}") 00:29:59.540 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:59.540 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:59.540 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:29:59.540 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:29:59.540 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:29:59.540 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:29:59.540 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:29:59.540 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:29:59.540 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:29:59.540 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:29:59.540 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:59.540 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:29:59.540 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:29:59.540 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:29:59.540 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:29:59.540 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:29:59.540 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:29:59.540 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:29:59.540 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:29:59.540 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:59.540 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:29:59.540 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:59.540 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:59.540 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:29:59.540 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:59.540 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:59.540 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 
0000:18:00.0: mlx_0_0' 00:29:59.540 Found net devices under 0000:18:00.0: mlx_0_0 00:29:59.540 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:59.540 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:59.540 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:59.540 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:29:59.540 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:59.540 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:59.540 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:29:59.540 Found net devices under 0000:18:00.1: mlx_0_1 00:29:59.540 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:59.540 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:59.540 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:29:59.540 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:59.540 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:29:59.540 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:29:59.540 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # rdma_device_init 00:29:59.540 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:29:59.540 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@62 -- # uname 00:29:59.540 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:29:59.540 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@66 -- # modprobe ib_cm 00:29:59.540 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@67 -- # modprobe ib_core 00:29:59.540 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@68 -- # modprobe ib_umad 00:29:59.540 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:29:59.540 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@70 -- # modprobe iw_cm 00:29:59.540 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:29:59.540 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:29:59.540 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@530 -- # allocate_nic_ips 00:29:59.540 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 
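
The device discovery just traced reduces to a sysfs lookup: every PCI function exposes its kernel net devices under /sys/bus/pci/devices/<addr>/net/, which is exactly the glob the trace expands. Below is a standalone sketch of that mapping; note the harness itself works from a pre-built lspci cache (pci_bus_cache), so scanning the sysfs vendor/device files here is a substitute technique, and the 0x15b3/0x1015 IDs are simply the ones reported above.

  #!/usr/bin/env bash
  # Sketch: map Mellanox PCI functions to their net devices via sysfs.
  # Substitute for the harness's cached lspci table; illustrative only.
  for pci in /sys/bus/pci/devices/*; do
    vendor=$(<"$pci/vendor")   # e.g. 0x15b3 (Mellanox)
    device=$(<"$pci/device")   # e.g. 0x1015 (ConnectX-4 Lx)
    [[ $vendor == 0x15b3 ]] || continue
    echo "Found ${pci##*/} ($vendor - $device)"
    for net in "$pci"/net/*; do
      [[ -e $net ]] && echo "Found net devices under ${pci##*/}: ${net##*/}"
    done
  done
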
00:29:59.540 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@77 -- # get_rdma_if_list 00:29:59.540 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:29:59.540 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:29:59.540 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:29:59.540 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:29:59.540 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:29:59.540 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:29:59.541 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:59.541 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:29:59.541 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:29:59.541 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@109 -- # continue 2 00:29:59.541 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:29:59.541 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:59.541 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:29:59.541 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:59.541 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:29:59.541 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:29:59.541 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@109 -- # continue 2 00:29:59.541 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:29:59.541 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:29:59.541 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:29:59.541 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:29:59.541 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:29:59.541 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:29:59.541 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:29:59.541 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:29:59.541 14:00:58 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:29:59.541 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:29:59.541 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:29:59.541 altname enp24s0f0np0 00:29:59.541 altname ens785f0np0 00:29:59.541 inet 192.168.100.8/24 scope global mlx_0_0 00:29:59.541 valid_lft forever preferred_lft forever 00:29:59.541 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:29:59.541 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:29:59.541 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:29:59.541 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:29:59.541 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:29:59.541 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:29:59.541 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:29:59.541 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:29:59.541 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:29:59.541 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:29:59.541 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:29:59.541 altname enp24s0f1np1 00:29:59.541 altname ens785f1np1 00:29:59.541 inet 192.168.100.9/24 scope global mlx_0_1 00:29:59.541 valid_lft forever preferred_lft forever 00:29:59.541 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:29:59.541 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:59.541 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:29:59.541 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:29:59.541 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:29:59.541 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@90 -- # get_rdma_if_list 00:29:59.541 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:29:59.541 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:29:59.541 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:29:59.541 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:29:59.541 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:29:59.541 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:29:59.541 
14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:59.541 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:29:59.541 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:29:59.541 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@109 -- # continue 2 00:29:59.541 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:29:59.541 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:59.541 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:29:59.541 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:59.541 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:29:59.541 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:29:59.541 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@109 -- # continue 2 00:29:59.541 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:29:59.541 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:29:59.541 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:29:59.541 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:29:59.541 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:29:59.541 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:29:59.541 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:29:59.541 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:29:59.541 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:29:59.541 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:29:59.541 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:29:59.541 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:29:59.541 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:29:59.541 192.168.100.9' 00:29:59.541 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:29:59.541 192.168.100.9' 00:29:59.541 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@485 -- # head -n 1 00:29:59.541 14:00:58 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:29:59.541 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:29:59.541 192.168.100.9' 00:29:59.541 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@486 -- # tail -n +2 00:29:59.541 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@486 -- # head -n 1 00:29:59.541 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:29:59.541 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:29:59.541 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:29:59.541 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:29:59.541 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:29:59.541 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:29:59.541 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:29:59.541 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:59.541 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:59.541 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:59.541 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=1842257 00:29:59.541 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 1842257 00:29:59.541 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:29:59.541 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 1842257 ']' 00:29:59.541 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:59.541 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:59.541 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:59.541 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:59.541 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:59.541 14:00:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:59.542 [2024-12-05 14:00:58.870078] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 
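
Two small idioms carry the address plumbing traced above: field 4 of `ip -o -4 addr show` is ADDR/PREFIX, so awk plus `cut -d/ -f1` yields the bare IPv4 address, and the first and second target IPs are then peeled off the newline-separated list with the `head -n 1` and `tail -n +2 | head -n 1` calls shown. A self-contained sketch of the pattern (the interface names are the mlx_0_0/mlx_0_1 pair enumerated above; this illustrates the idiom, it is not the test library verbatim):

  #!/usr/bin/env bash
  # Sketch of the address extraction used by get_ip_address / allocate_nic_ips.
  get_ip_address() {
      local interface=$1
      # "ip -o -4" prints one line per address; field 4 is ADDR/PREFIX.
      ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
  }

  rdma_ips=$(for ifc in mlx_0_0 mlx_0_1; do get_ip_address "$ifc"; done)

  # First line -> first target, second line -> second target, as in the trace.
  first_ip=$(echo "$rdma_ips" | head -n 1)
  second_ip=$(echo "$rdma_ips" | tail -n +2 | head -n 1)
  echo "NVMF_FIRST_TARGET_IP=$first_ip NVMF_SECOND_TARGET_IP=$second_ip"
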
00:29:59.542 [2024-12-05 14:00:58.870127] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:59.542 [2024-12-05 14:00:58.944200] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:59.542 [2024-12-05 14:00:58.965971] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:59.542 [2024-12-05 14:00:58.966010] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:59.542 [2024-12-05 14:00:58.966017] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:59.542 [2024-12-05 14:00:58.966022] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:59.542 [2024-12-05 14:00:58.966028] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:59.542 [2024-12-05 14:00:58.967485] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:59.542 [2024-12-05 14:00:58.967592] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:59.542 [2024-12-05 14:00:58.967699] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:59.542 [2024-12-05 14:00:58.967700] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:29:59.542 14:00:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:59.542 14:00:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:29:59.542 14:00:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:59.542 14:00:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:59.542 14:00:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:59.542 14:00:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:59.542 14:00:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:29:59.542 14:00:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.542 14:00:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:59.542 [2024-12-05 14:00:59.121659] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1408230/0x140c720) succeed. 00:29:59.542 [2024-12-05 14:00:59.129947] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x14098c0/0x144ddc0) succeed. 
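
Stepping back to the app start: the reactor layout logged above falls straight out of the core mask. 0x1E is binary 11110, so bits 1 through 4 are set and nvmf_tgt places one reactor on each of cores 1-4, leaving core 0 free for the single-core (-m 0x1) helper apps started later. A short decoding sketch (illustrative arithmetic, not SPDK code):

  #!/usr/bin/env bash
  # Decode a hex core mask into the core IDs it selects.
  mask=0x1E
  for core in {0..31}; do
      if (( (mask >> core) & 1 )); then
          echo "reactor on core $core"
      fi
  done
  # 0x1E -> cores 1 2 3 4, matching the four reactors in the log.
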
00:29:59.542 14:00:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.542 14:00:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:29:59.542 14:00:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:29:59.542 14:00:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:59.542 14:00:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:59.542 14:00:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:59.542 14:00:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:59.542 14:00:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:59.542 14:00:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:59.542 14:00:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:59.542 14:00:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:59.542 14:00:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:59.542 14:00:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:59.542 14:00:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:59.542 14:00:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:59.542 14:00:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:59.542 14:00:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:59.542 14:00:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:59.542 14:00:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:59.542 14:00:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:59.542 14:00:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:59.542 14:00:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:59.542 14:00:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:59.542 14:00:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:59.542 14:00:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:59.542 14:00:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:59.542 14:00:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
target/shutdown.sh@36 -- # rpc_cmd 00:29:59.542 14:00:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.542 14:00:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:59.542 Malloc1 00:29:59.542 [2024-12-05 14:00:59.353471] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:29:59.542 Malloc2 00:29:59.801 Malloc3 00:29:59.801 Malloc4 00:29:59.801 Malloc5 00:29:59.801 Malloc6 00:29:59.801 Malloc7 00:29:59.801 Malloc8 00:30:00.061 Malloc9 00:30:00.061 Malloc10 00:30:00.061 14:00:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:00.061 14:00:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:30:00.061 14:00:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:00.061 14:00:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:00.061 14:00:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=1842450 00:30:00.061 14:00:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 1842450 /var/tmp/bdevperf.sock 00:30:00.061 14:00:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 1842450 ']' 00:30:00.061 14:00:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:00.061 14:00:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:30:00.061 14:00:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:00.061 14:00:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:30:00.061 14:00:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:00.061 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:30:00.061 14:00:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:30:00.061 14:00:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:00.061 14:00:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:30:00.061 14:00:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:00.061 14:00:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:00.061 14:00:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:00.061 { 00:30:00.061 "params": { 00:30:00.061 "name": "Nvme$subsystem", 00:30:00.061 "trtype": "$TEST_TRANSPORT", 00:30:00.061 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:00.061 "adrfam": "ipv4", 00:30:00.061 "trsvcid": "$NVMF_PORT", 00:30:00.061 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:00.061 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:00.061 "hdgst": ${hdgst:-false}, 00:30:00.061 "ddgst": ${ddgst:-false} 00:30:00.061 }, 00:30:00.061 "method": "bdev_nvme_attach_controller" 00:30:00.061 } 00:30:00.061 EOF 00:30:00.061 )") 00:30:00.061 14:00:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:30:00.061 14:00:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:00.061 14:00:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:00.061 { 00:30:00.061 "params": { 00:30:00.061 "name": "Nvme$subsystem", 00:30:00.061 "trtype": "$TEST_TRANSPORT", 00:30:00.061 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:00.061 "adrfam": "ipv4", 00:30:00.061 "trsvcid": "$NVMF_PORT", 00:30:00.061 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:00.061 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:00.061 "hdgst": ${hdgst:-false}, 00:30:00.061 "ddgst": ${ddgst:-false} 00:30:00.061 }, 00:30:00.061 "method": "bdev_nvme_attach_controller" 00:30:00.061 } 00:30:00.061 EOF 00:30:00.061 )") 00:30:00.061 14:00:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:30:00.061 14:00:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:00.061 14:00:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:00.061 { 00:30:00.061 "params": { 00:30:00.061 "name": "Nvme$subsystem", 00:30:00.061 "trtype": "$TEST_TRANSPORT", 00:30:00.061 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:00.061 "adrfam": "ipv4", 00:30:00.061 "trsvcid": "$NVMF_PORT", 00:30:00.061 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:00.061 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:00.061 "hdgst": ${hdgst:-false}, 00:30:00.061 "ddgst": ${ddgst:-false} 00:30:00.061 }, 00:30:00.061 "method": "bdev_nvme_attach_controller" 00:30:00.061 } 00:30:00.061 EOF 00:30:00.061 )") 00:30:00.061 14:00:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:30:00.061 14:00:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:00.061 14:00:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:00.061 { 00:30:00.061 "params": { 00:30:00.061 "name": "Nvme$subsystem", 00:30:00.061 "trtype": "$TEST_TRANSPORT", 00:30:00.061 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:00.061 "adrfam": "ipv4", 00:30:00.061 "trsvcid": "$NVMF_PORT", 00:30:00.061 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:00.061 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:00.061 "hdgst": ${hdgst:-false}, 00:30:00.061 "ddgst": ${ddgst:-false} 00:30:00.061 }, 00:30:00.061 "method": "bdev_nvme_attach_controller" 00:30:00.061 } 00:30:00.061 EOF 00:30:00.061 )") 00:30:00.061 14:00:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:30:00.061 14:00:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:00.061 14:00:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:00.061 { 00:30:00.061 "params": { 00:30:00.062 "name": "Nvme$subsystem", 00:30:00.062 "trtype": "$TEST_TRANSPORT", 00:30:00.062 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:00.062 "adrfam": "ipv4", 00:30:00.062 "trsvcid": "$NVMF_PORT", 00:30:00.062 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:00.062 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:00.062 "hdgst": ${hdgst:-false}, 00:30:00.062 "ddgst": ${ddgst:-false} 00:30:00.062 }, 00:30:00.062 "method": "bdev_nvme_attach_controller" 00:30:00.062 } 00:30:00.062 EOF 00:30:00.062 )") 00:30:00.062 14:00:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:30:00.062 14:00:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:00.062 14:00:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:00.062 { 00:30:00.062 "params": { 00:30:00.062 "name": "Nvme$subsystem", 00:30:00.062 "trtype": "$TEST_TRANSPORT", 00:30:00.062 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:00.062 "adrfam": "ipv4", 00:30:00.062 "trsvcid": "$NVMF_PORT", 00:30:00.062 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:00.062 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:00.062 "hdgst": ${hdgst:-false}, 00:30:00.062 "ddgst": ${ddgst:-false} 00:30:00.062 }, 00:30:00.062 "method": "bdev_nvme_attach_controller" 00:30:00.062 } 00:30:00.062 EOF 00:30:00.062 )") 00:30:00.062 14:00:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:30:00.062 [2024-12-05 14:00:59.822083] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 
00:30:00.062 [2024-12-05 14:00:59.822127] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:30:00.062 14:00:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:00.062 14:00:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:00.062 { 00:30:00.062 "params": { 00:30:00.062 "name": "Nvme$subsystem", 00:30:00.062 "trtype": "$TEST_TRANSPORT", 00:30:00.062 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:00.062 "adrfam": "ipv4", 00:30:00.062 "trsvcid": "$NVMF_PORT", 00:30:00.062 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:00.062 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:00.062 "hdgst": ${hdgst:-false}, 00:30:00.062 "ddgst": ${ddgst:-false} 00:30:00.062 }, 00:30:00.062 "method": "bdev_nvme_attach_controller" 00:30:00.062 } 00:30:00.062 EOF 00:30:00.062 )") 00:30:00.062 14:00:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:30:00.062 14:00:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:00.062 14:00:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:00.062 { 00:30:00.062 "params": { 00:30:00.062 "name": "Nvme$subsystem", 00:30:00.062 "trtype": "$TEST_TRANSPORT", 00:30:00.062 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:00.062 "adrfam": "ipv4", 00:30:00.062 "trsvcid": "$NVMF_PORT", 00:30:00.062 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:00.062 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:00.062 "hdgst": ${hdgst:-false}, 00:30:00.062 "ddgst": ${ddgst:-false} 00:30:00.062 }, 00:30:00.062 "method": "bdev_nvme_attach_controller" 00:30:00.062 } 00:30:00.062 EOF 00:30:00.062 )") 00:30:00.062 14:00:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:30:00.062 14:00:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:00.062 14:00:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:00.062 { 00:30:00.062 "params": { 00:30:00.062 "name": "Nvme$subsystem", 00:30:00.062 "trtype": "$TEST_TRANSPORT", 00:30:00.062 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:00.062 "adrfam": "ipv4", 00:30:00.062 "trsvcid": "$NVMF_PORT", 00:30:00.062 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:00.062 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:00.062 "hdgst": ${hdgst:-false}, 00:30:00.062 "ddgst": ${ddgst:-false} 00:30:00.062 }, 00:30:00.062 "method": "bdev_nvme_attach_controller" 00:30:00.062 } 00:30:00.062 EOF 00:30:00.062 )") 00:30:00.062 14:00:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:30:00.062 14:00:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:00.062 14:00:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:00.062 { 00:30:00.062 "params": { 00:30:00.062 "name": "Nvme$subsystem", 00:30:00.062 "trtype": "$TEST_TRANSPORT", 00:30:00.062 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:00.062 "adrfam": 
"ipv4", 00:30:00.062 "trsvcid": "$NVMF_PORT", 00:30:00.062 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:00.062 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:00.062 "hdgst": ${hdgst:-false}, 00:30:00.062 "ddgst": ${ddgst:-false} 00:30:00.062 }, 00:30:00.062 "method": "bdev_nvme_attach_controller" 00:30:00.062 } 00:30:00.062 EOF 00:30:00.062 )") 00:30:00.062 14:00:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:30:00.062 14:00:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 00:30:00.062 14:00:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:30:00.062 14:00:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:00.062 "params": { 00:30:00.062 "name": "Nvme1", 00:30:00.062 "trtype": "rdma", 00:30:00.062 "traddr": "192.168.100.8", 00:30:00.062 "adrfam": "ipv4", 00:30:00.062 "trsvcid": "4420", 00:30:00.062 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:00.062 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:00.062 "hdgst": false, 00:30:00.062 "ddgst": false 00:30:00.062 }, 00:30:00.062 "method": "bdev_nvme_attach_controller" 00:30:00.062 },{ 00:30:00.062 "params": { 00:30:00.062 "name": "Nvme2", 00:30:00.062 "trtype": "rdma", 00:30:00.062 "traddr": "192.168.100.8", 00:30:00.062 "adrfam": "ipv4", 00:30:00.062 "trsvcid": "4420", 00:30:00.062 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:30:00.062 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:30:00.062 "hdgst": false, 00:30:00.062 "ddgst": false 00:30:00.062 }, 00:30:00.062 "method": "bdev_nvme_attach_controller" 00:30:00.062 },{ 00:30:00.062 "params": { 00:30:00.062 "name": "Nvme3", 00:30:00.062 "trtype": "rdma", 00:30:00.062 "traddr": "192.168.100.8", 00:30:00.062 "adrfam": "ipv4", 00:30:00.062 "trsvcid": "4420", 00:30:00.062 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:30:00.062 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:30:00.062 "hdgst": false, 00:30:00.062 "ddgst": false 00:30:00.062 }, 00:30:00.062 "method": "bdev_nvme_attach_controller" 00:30:00.062 },{ 00:30:00.062 "params": { 00:30:00.062 "name": "Nvme4", 00:30:00.062 "trtype": "rdma", 00:30:00.062 "traddr": "192.168.100.8", 00:30:00.062 "adrfam": "ipv4", 00:30:00.062 "trsvcid": "4420", 00:30:00.062 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:30:00.062 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:30:00.062 "hdgst": false, 00:30:00.062 "ddgst": false 00:30:00.062 }, 00:30:00.062 "method": "bdev_nvme_attach_controller" 00:30:00.062 },{ 00:30:00.062 "params": { 00:30:00.062 "name": "Nvme5", 00:30:00.062 "trtype": "rdma", 00:30:00.062 "traddr": "192.168.100.8", 00:30:00.062 "adrfam": "ipv4", 00:30:00.062 "trsvcid": "4420", 00:30:00.062 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:30:00.062 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:30:00.062 "hdgst": false, 00:30:00.062 "ddgst": false 00:30:00.062 }, 00:30:00.062 "method": "bdev_nvme_attach_controller" 00:30:00.062 },{ 00:30:00.062 "params": { 00:30:00.062 "name": "Nvme6", 00:30:00.062 "trtype": "rdma", 00:30:00.062 "traddr": "192.168.100.8", 00:30:00.062 "adrfam": "ipv4", 00:30:00.062 "trsvcid": "4420", 00:30:00.062 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:30:00.062 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:30:00.062 "hdgst": false, 00:30:00.062 "ddgst": false 00:30:00.062 }, 00:30:00.062 "method": "bdev_nvme_attach_controller" 00:30:00.062 },{ 00:30:00.062 "params": { 00:30:00.062 "name": "Nvme7", 00:30:00.062 "trtype": "rdma", 
00:30:00.062 "traddr": "192.168.100.8", 00:30:00.062 "adrfam": "ipv4", 00:30:00.062 "trsvcid": "4420", 00:30:00.062 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:30:00.062 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:30:00.062 "hdgst": false, 00:30:00.062 "ddgst": false 00:30:00.062 }, 00:30:00.062 "method": "bdev_nvme_attach_controller" 00:30:00.062 },{ 00:30:00.062 "params": { 00:30:00.062 "name": "Nvme8", 00:30:00.062 "trtype": "rdma", 00:30:00.062 "traddr": "192.168.100.8", 00:30:00.062 "adrfam": "ipv4", 00:30:00.062 "trsvcid": "4420", 00:30:00.062 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:30:00.062 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:30:00.062 "hdgst": false, 00:30:00.062 "ddgst": false 00:30:00.062 }, 00:30:00.062 "method": "bdev_nvme_attach_controller" 00:30:00.062 },{ 00:30:00.062 "params": { 00:30:00.062 "name": "Nvme9", 00:30:00.063 "trtype": "rdma", 00:30:00.063 "traddr": "192.168.100.8", 00:30:00.063 "adrfam": "ipv4", 00:30:00.063 "trsvcid": "4420", 00:30:00.063 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:30:00.063 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:30:00.063 "hdgst": false, 00:30:00.063 "ddgst": false 00:30:00.063 }, 00:30:00.063 "method": "bdev_nvme_attach_controller" 00:30:00.063 },{ 00:30:00.063 "params": { 00:30:00.063 "name": "Nvme10", 00:30:00.063 "trtype": "rdma", 00:30:00.063 "traddr": "192.168.100.8", 00:30:00.063 "adrfam": "ipv4", 00:30:00.063 "trsvcid": "4420", 00:30:00.063 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:30:00.063 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:30:00.063 "hdgst": false, 00:30:00.063 "ddgst": false 00:30:00.063 }, 00:30:00.063 "method": "bdev_nvme_attach_controller" 00:30:00.063 }' 00:30:00.063 [2024-12-05 14:00:59.895819] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:00.321 [2024-12-05 14:00:59.917262] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:01.258 14:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:01.258 14:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:30:01.258 14:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:30:01.258 14:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:01.258 14:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:01.258 14:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:01.258 14:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 1842450 00:30:01.258 14:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:30:01.258 14:01:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:30:02.196 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 1842450 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:30:02.196 14:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 1842257 00:30:02.196 14:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:30:02.196 14:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:30:02.196 14:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:30:02.196 14:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:30:02.196 14:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:02.196 14:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:02.196 { 00:30:02.196 "params": { 00:30:02.196 "name": "Nvme$subsystem", 00:30:02.196 "trtype": "$TEST_TRANSPORT", 00:30:02.196 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:02.196 "adrfam": "ipv4", 00:30:02.196 "trsvcid": "$NVMF_PORT", 00:30:02.196 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:02.196 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:02.196 "hdgst": ${hdgst:-false}, 00:30:02.196 "ddgst": ${ddgst:-false} 00:30:02.196 }, 00:30:02.196 "method": "bdev_nvme_attach_controller" 00:30:02.196 } 00:30:02.196 EOF 00:30:02.196 )") 00:30:02.196 14:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:30:02.196 14:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:02.196 14:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:02.196 { 00:30:02.196 "params": { 00:30:02.196 "name": "Nvme$subsystem", 00:30:02.196 "trtype": "$TEST_TRANSPORT", 00:30:02.196 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:02.196 "adrfam": "ipv4", 00:30:02.196 "trsvcid": "$NVMF_PORT", 00:30:02.196 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:02.196 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:02.196 "hdgst": ${hdgst:-false}, 00:30:02.196 "ddgst": ${ddgst:-false} 00:30:02.196 }, 00:30:02.196 "method": "bdev_nvme_attach_controller" 00:30:02.196 } 00:30:02.196 EOF 00:30:02.196 )") 00:30:02.196 14:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:30:02.196 14:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:02.196 14:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:02.196 { 00:30:02.196 "params": { 00:30:02.196 "name": "Nvme$subsystem", 00:30:02.196 "trtype": "$TEST_TRANSPORT", 00:30:02.196 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:02.196 "adrfam": "ipv4", 00:30:02.196 "trsvcid": "$NVMF_PORT", 00:30:02.196 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:02.196 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:02.196 "hdgst": ${hdgst:-false}, 00:30:02.196 "ddgst": ${ddgst:-false} 00:30:02.196 }, 00:30:02.196 "method": "bdev_nvme_attach_controller" 00:30:02.196 } 00:30:02.196 EOF 00:30:02.196 )") 00:30:02.196 14:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:30:02.196 14:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:02.196 14:01:01 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:02.196 { 00:30:02.196 "params": { 00:30:02.196 "name": "Nvme$subsystem", 00:30:02.196 "trtype": "$TEST_TRANSPORT", 00:30:02.196 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:02.196 "adrfam": "ipv4", 00:30:02.196 "trsvcid": "$NVMF_PORT", 00:30:02.196 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:02.196 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:02.196 "hdgst": ${hdgst:-false}, 00:30:02.196 "ddgst": ${ddgst:-false} 00:30:02.196 }, 00:30:02.196 "method": "bdev_nvme_attach_controller" 00:30:02.196 } 00:30:02.196 EOF 00:30:02.196 )") 00:30:02.196 14:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:30:02.196 14:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:02.196 14:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:02.196 { 00:30:02.196 "params": { 00:30:02.196 "name": "Nvme$subsystem", 00:30:02.196 "trtype": "$TEST_TRANSPORT", 00:30:02.196 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:02.196 "adrfam": "ipv4", 00:30:02.196 "trsvcid": "$NVMF_PORT", 00:30:02.196 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:02.196 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:02.197 "hdgst": ${hdgst:-false}, 00:30:02.197 "ddgst": ${ddgst:-false} 00:30:02.197 }, 00:30:02.197 "method": "bdev_nvme_attach_controller" 00:30:02.197 } 00:30:02.197 EOF 00:30:02.197 )") 00:30:02.197 14:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:30:02.197 14:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:02.197 14:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:02.197 { 00:30:02.197 "params": { 00:30:02.197 "name": "Nvme$subsystem", 00:30:02.197 "trtype": "$TEST_TRANSPORT", 00:30:02.197 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:02.197 "adrfam": "ipv4", 00:30:02.197 "trsvcid": "$NVMF_PORT", 00:30:02.197 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:02.197 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:02.197 "hdgst": ${hdgst:-false}, 00:30:02.197 "ddgst": ${ddgst:-false} 00:30:02.197 }, 00:30:02.197 "method": "bdev_nvme_attach_controller" 00:30:02.197 } 00:30:02.197 EOF 00:30:02.197 )") 00:30:02.197 14:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:30:02.197 14:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:02.197 14:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:02.197 { 00:30:02.197 "params": { 00:30:02.197 "name": "Nvme$subsystem", 00:30:02.197 "trtype": "$TEST_TRANSPORT", 00:30:02.197 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:02.197 "adrfam": "ipv4", 00:30:02.197 "trsvcid": "$NVMF_PORT", 00:30:02.197 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:02.197 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:02.197 "hdgst": ${hdgst:-false}, 00:30:02.197 "ddgst": ${ddgst:-false} 00:30:02.197 }, 00:30:02.197 "method": "bdev_nvme_attach_controller" 00:30:02.197 } 00:30:02.197 EOF 00:30:02.197 )") 00:30:02.197 [2024-12-05 14:01:01.826048] Starting SPDK v25.01-pre git 
sha1 8d3947977 / DPDK 23.11.0 initialization... 00:30:02.197 [2024-12-05 14:01:01.826095] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1842824 ] 00:30:02.197 14:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:30:02.197 14:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:02.197 14:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:02.197 { 00:30:02.197 "params": { 00:30:02.197 "name": "Nvme$subsystem", 00:30:02.197 "trtype": "$TEST_TRANSPORT", 00:30:02.197 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:02.197 "adrfam": "ipv4", 00:30:02.197 "trsvcid": "$NVMF_PORT", 00:30:02.197 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:02.197 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:02.197 "hdgst": ${hdgst:-false}, 00:30:02.197 "ddgst": ${ddgst:-false} 00:30:02.197 }, 00:30:02.197 "method": "bdev_nvme_attach_controller" 00:30:02.197 } 00:30:02.197 EOF 00:30:02.197 )") 00:30:02.197 14:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:30:02.197 14:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:02.197 14:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:02.197 { 00:30:02.197 "params": { 00:30:02.197 "name": "Nvme$subsystem", 00:30:02.197 "trtype": "$TEST_TRANSPORT", 00:30:02.197 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:02.197 "adrfam": "ipv4", 00:30:02.197 "trsvcid": "$NVMF_PORT", 00:30:02.197 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:02.197 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:02.197 "hdgst": ${hdgst:-false}, 00:30:02.197 "ddgst": ${ddgst:-false} 00:30:02.197 }, 00:30:02.197 "method": "bdev_nvme_attach_controller" 00:30:02.197 } 00:30:02.197 EOF 00:30:02.197 )") 00:30:02.197 14:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:30:02.197 14:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:02.197 14:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:02.197 { 00:30:02.197 "params": { 00:30:02.197 "name": "Nvme$subsystem", 00:30:02.197 "trtype": "$TEST_TRANSPORT", 00:30:02.197 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:02.197 "adrfam": "ipv4", 00:30:02.197 "trsvcid": "$NVMF_PORT", 00:30:02.197 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:02.197 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:02.197 "hdgst": ${hdgst:-false}, 00:30:02.197 "ddgst": ${ddgst:-false} 00:30:02.197 }, 00:30:02.197 "method": "bdev_nvme_attach_controller" 00:30:02.197 } 00:30:02.197 EOF 00:30:02.197 )") 00:30:02.197 14:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:30:02.197 14:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
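The heredoc records above are nvmf/common.sh's gen_nvmf_target_json helper being traced (it already ran once for the bdev_svc app and runs again here for bdevperf): each loop pass appends one bdev_nvme_attach_controller fragment per subsystem to the config array, and the jq ., IFS=, and printf records around this point join those fragments into the --json payload bdevperf reads from /dev/fd/62. A minimal self-contained sketch of that pattern follows; the function name and the outer "subsystems"/"bdev" wrapper object are assumptions (the wrapper is not visible in this log), while TEST_TRANSPORT, NVMF_FIRST_TARGET_IP and NVMF_PORT are taken from this run's values:

gen_target_json_sketch() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        # one attach-controller fragment per subsystem, as in the traced heredocs
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
    done
    # IFS=, makes ${config[*]} expand comma-separated; jq validates and pretty-prints.
    # The exact wrapper object in common.sh is not shown in this log, so this one is assumed.
    local IFS=,
    jq . <<EOF
{ "subsystems": [ { "subsystem": "bdev", "config": [ ${config[*]} ] } ] }
EOF
}

export TEST_TRANSPORT=rdma NVMF_FIRST_TARGET_IP=192.168.100.8 NVMF_PORT=4420
gen_target_json_sketch 1 2 3

With those exports, the sketch prints entries shaped exactly like the first three of the ten controller entries in the printf output below.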
00:30:02.197 14:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:30:02.197 14:01:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:02.197 "params": { 00:30:02.197 "name": "Nvme1", 00:30:02.197 "trtype": "rdma", 00:30:02.197 "traddr": "192.168.100.8", 00:30:02.197 "adrfam": "ipv4", 00:30:02.197 "trsvcid": "4420", 00:30:02.197 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:02.197 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:02.197 "hdgst": false, 00:30:02.197 "ddgst": false 00:30:02.197 }, 00:30:02.197 "method": "bdev_nvme_attach_controller" 00:30:02.197 },{ 00:30:02.197 "params": { 00:30:02.197 "name": "Nvme2", 00:30:02.197 "trtype": "rdma", 00:30:02.197 "traddr": "192.168.100.8", 00:30:02.197 "adrfam": "ipv4", 00:30:02.197 "trsvcid": "4420", 00:30:02.197 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:30:02.198 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:30:02.198 "hdgst": false, 00:30:02.198 "ddgst": false 00:30:02.198 }, 00:30:02.198 "method": "bdev_nvme_attach_controller" 00:30:02.198 },{ 00:30:02.198 "params": { 00:30:02.198 "name": "Nvme3", 00:30:02.198 "trtype": "rdma", 00:30:02.198 "traddr": "192.168.100.8", 00:30:02.198 "adrfam": "ipv4", 00:30:02.198 "trsvcid": "4420", 00:30:02.198 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:30:02.198 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:30:02.198 "hdgst": false, 00:30:02.198 "ddgst": false 00:30:02.198 }, 00:30:02.198 "method": "bdev_nvme_attach_controller" 00:30:02.198 },{ 00:30:02.198 "params": { 00:30:02.198 "name": "Nvme4", 00:30:02.198 "trtype": "rdma", 00:30:02.198 "traddr": "192.168.100.8", 00:30:02.198 "adrfam": "ipv4", 00:30:02.198 "trsvcid": "4420", 00:30:02.198 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:30:02.198 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:30:02.198 "hdgst": false, 00:30:02.198 "ddgst": false 00:30:02.198 }, 00:30:02.198 "method": "bdev_nvme_attach_controller" 00:30:02.198 },{ 00:30:02.198 "params": { 00:30:02.198 "name": "Nvme5", 00:30:02.198 "trtype": "rdma", 00:30:02.198 "traddr": "192.168.100.8", 00:30:02.198 "adrfam": "ipv4", 00:30:02.198 "trsvcid": "4420", 00:30:02.198 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:30:02.198 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:30:02.198 "hdgst": false, 00:30:02.198 "ddgst": false 00:30:02.198 }, 00:30:02.198 "method": "bdev_nvme_attach_controller" 00:30:02.198 },{ 00:30:02.198 "params": { 00:30:02.198 "name": "Nvme6", 00:30:02.198 "trtype": "rdma", 00:30:02.198 "traddr": "192.168.100.8", 00:30:02.198 "adrfam": "ipv4", 00:30:02.198 "trsvcid": "4420", 00:30:02.198 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:30:02.198 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:30:02.198 "hdgst": false, 00:30:02.198 "ddgst": false 00:30:02.198 }, 00:30:02.198 "method": "bdev_nvme_attach_controller" 00:30:02.198 },{ 00:30:02.198 "params": { 00:30:02.198 "name": "Nvme7", 00:30:02.198 "trtype": "rdma", 00:30:02.198 "traddr": "192.168.100.8", 00:30:02.198 "adrfam": "ipv4", 00:30:02.198 "trsvcid": "4420", 00:30:02.198 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:30:02.198 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:30:02.198 "hdgst": false, 00:30:02.198 "ddgst": false 00:30:02.198 }, 00:30:02.198 "method": "bdev_nvme_attach_controller" 00:30:02.198 },{ 00:30:02.198 "params": { 00:30:02.198 "name": "Nvme8", 00:30:02.198 "trtype": "rdma", 00:30:02.198 "traddr": "192.168.100.8", 00:30:02.198 "adrfam": "ipv4", 00:30:02.198 "trsvcid": "4420", 00:30:02.198 "subnqn": "nqn.2016-06.io.spdk:cnode8", 
00:30:02.198 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:30:02.198 "hdgst": false, 00:30:02.198 "ddgst": false 00:30:02.198 }, 00:30:02.198 "method": "bdev_nvme_attach_controller" 00:30:02.198 },{ 00:30:02.198 "params": { 00:30:02.198 "name": "Nvme9", 00:30:02.198 "trtype": "rdma", 00:30:02.198 "traddr": "192.168.100.8", 00:30:02.198 "adrfam": "ipv4", 00:30:02.198 "trsvcid": "4420", 00:30:02.198 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:30:02.198 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:30:02.198 "hdgst": false, 00:30:02.198 "ddgst": false 00:30:02.198 }, 00:30:02.198 "method": "bdev_nvme_attach_controller" 00:30:02.198 },{ 00:30:02.198 "params": { 00:30:02.198 "name": "Nvme10", 00:30:02.198 "trtype": "rdma", 00:30:02.198 "traddr": "192.168.100.8", 00:30:02.198 "adrfam": "ipv4", 00:30:02.198 "trsvcid": "4420", 00:30:02.198 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:30:02.198 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:30:02.198 "hdgst": false, 00:30:02.198 "ddgst": false 00:30:02.198 }, 00:30:02.198 "method": "bdev_nvme_attach_controller" 00:30:02.198 }' 00:30:02.198 [2024-12-05 14:01:01.901570] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:02.198 [2024-12-05 14:01:01.922900] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:03.135 Running I/O for 1 seconds... 00:30:04.331 3651.00 IOPS, 228.19 MiB/s 00:30:04.331 Latency(us) 00:30:04.331 [2024-12-05T13:01:04.184Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:04.331 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:04.331 Verification LBA range: start 0x0 length 0x400 00:30:04.331 Nvme1n1 : 1.16 387.80 24.24 0.00 0.00 159330.72 34952.53 196510.91 00:30:04.331 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:04.331 Verification LBA range: start 0x0 length 0x400 00:30:04.331 Nvme2n1 : 1.16 390.03 24.38 0.00 0.00 156262.56 4563.25 187190.23 00:30:04.331 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:04.331 Verification LBA range: start 0x0 length 0x400 00:30:04.331 Nvme3n1 : 1.16 409.57 25.60 0.00 0.00 146223.68 4514.70 133596.35 00:30:04.331 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:04.331 Verification LBA range: start 0x0 length 0x400 00:30:04.331 Nvme4n1 : 1.16 401.50 25.09 0.00 0.00 146222.69 8786.68 124275.67 00:30:04.331 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:04.331 Verification LBA range: start 0x0 length 0x400 00:30:04.332 Nvme5n1 : 1.16 400.35 25.02 0.00 0.00 144609.41 12621.75 114178.28 00:30:04.332 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:04.332 Verification LBA range: start 0x0 length 0x400 00:30:04.332 Nvme6n1 : 1.16 394.92 24.68 0.00 0.00 143985.02 16796.63 103304.15 00:30:04.332 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:04.332 Verification LBA range: start 0x0 length 0x400 00:30:04.332 Nvme7n1 : 1.16 439.89 27.49 0.00 0.00 133050.45 2682.12 95148.56 00:30:04.332 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:04.332 Verification LBA range: start 0x0 length 0x400 00:30:04.332 Nvme8n1 : 1.17 439.47 27.47 0.00 0.00 131379.91 3046.21 92818.39 00:30:04.332 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:04.332 Verification LBA range: start 0x0 length 0x400 00:30:04.332 Nvme9n1 : 1.17 438.38 27.40 0.00 0.00 130115.46 5121.52 94371.84 00:30:04.332 Job: 
Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:04.332 Verification LBA range: start 0x0 length 0x400 00:30:04.332 Nvme10n1 : 1.17 328.41 20.53 0.00 0.00 171230.29 3082.62 337097.77 00:30:04.332 [2024-12-05T13:01:04.185Z] =================================================================================================================== 00:30:04.332 [2024-12-05T13:01:04.185Z] Total : 4030.32 251.89 0.00 0.00 145291.02 2682.12 337097.77 00:30:04.590 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:30:04.590 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:30:04.590 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:30:04.590 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:04.590 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:30:04.590 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:04.590 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:30:04.590 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:30:04.590 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:30:04.591 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:30:04.591 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:04.591 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:30:04.591 rmmod nvme_rdma 00:30:04.591 rmmod nvme_fabrics 00:30:04.591 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:04.591 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:30:04.591 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:30:04.591 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 1842257 ']' 00:30:04.591 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 1842257 00:30:04.591 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 1842257 ']' 00:30:04.591 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # kill -0 1842257 00:30:04.591 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:30:04.591 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:04.591 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1842257 00:30:04.591 14:01:04 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:04.591 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:04.591 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1842257' 00:30:04.591 killing process with pid 1842257 00:30:04.591 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 1842257 00:30:04.591 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 1842257 00:30:05.173 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:05.173 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:30:05.173 00:30:05.173 real 0m11.990s 00:30:05.173 user 0m27.254s 00:30:05.173 sys 0m5.532s 00:30:05.173 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:05.173 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:05.173 ************************************ 00:30:05.173 END TEST nvmf_shutdown_tc1 00:30:05.173 ************************************ 00:30:05.173 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:30:05.173 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:05.173 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:05.173 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:05.173 ************************************ 00:30:05.173 START TEST nvmf_shutdown_tc2 00:30:05.173 ************************************ 00:30:05.173 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc2 00:30:05.173 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:30:05.173 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:30:05.173 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:30:05.173 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:05.173 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:05.173 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:05.173 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:05.173 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:05.173 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:05.173 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:05.173 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:05.173 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:05.173 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:30:05.173 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:05.173 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:05.173 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:30:05.173 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:05.173 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:05.173 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:05.173 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:05.173 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:05.173 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:30:05.173 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:05.173 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:30:05.173 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:30:05.173 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:30:05.173 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:30:05.173 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:30:05.173 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:30:05.173 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:05.173 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:05.173 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:05.173 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:05.173 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:05.173 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:05.173 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:05.173 14:01:04 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:05.173 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:05.173 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:05.173 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:05.173 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:05.173 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:05.173 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:30:05.173 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:30:05.173 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:30:05.173 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:30:05.173 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:30:05.173 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:05.173 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:05.173 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:30:05.173 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:30:05.173 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:30:05.173 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:30:05.173 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:30:05.173 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:30:05.173 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:30:05.173 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:30:05.173 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:05.173 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:30:05.173 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:30:05.173 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:30:05.173 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:30:05.173 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x1015 == 
\0\x\1\0\1\7 ]] 00:30:05.173 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:30:05.173 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:30:05.173 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:30:05.174 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:05.174 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:30:05.174 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:05.174 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:05.174 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:30:05.174 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:05.174 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:05.174 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:30:05.174 Found net devices under 0000:18:00.0: mlx_0_0 00:30:05.174 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:05.174 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:05.174 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:05.174 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:30:05.174 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:05.174 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:05.174 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:30:05.174 Found net devices under 0000:18:00.1: mlx_0_1 00:30:05.174 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:05.174 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:05.174 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:30:05.174 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:05.174 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:30:05.174 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:30:05.174 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # rdma_device_init 00:30:05.174 
14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:30:05.174 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@62 -- # uname 00:30:05.174 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:30:05.174 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@66 -- # modprobe ib_cm 00:30:05.174 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@67 -- # modprobe ib_core 00:30:05.174 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@68 -- # modprobe ib_umad 00:30:05.174 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:30:05.174 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@70 -- # modprobe iw_cm 00:30:05.174 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:30:05.174 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:30:05.174 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@530 -- # allocate_nic_ips 00:30:05.174 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:30:05.174 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@77 -- # get_rdma_if_list 00:30:05.174 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:30:05.174 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:30:05.174 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:30:05.174 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:30:05.174 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:30:05.174 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:30:05.174 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:05.174 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:30:05.174 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:30:05.174 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@109 -- # continue 2 00:30:05.174 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:30:05.174 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:05.174 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:30:05.174 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # 
for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:05.174 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:30:05.174 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:30:05.174 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@109 -- # continue 2 00:30:05.174 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:30:05.174 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:30:05.174 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:30:05.174 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:30:05.174 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:30:05.174 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:30:05.174 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:30:05.174 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:30:05.174 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:30:05.174 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:30:05.174 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:30:05.174 altname enp24s0f0np0 00:30:05.174 altname ens785f0np0 00:30:05.174 inet 192.168.100.8/24 scope global mlx_0_0 00:30:05.174 valid_lft forever preferred_lft forever 00:30:05.174 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:30:05.174 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:30:05.174 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:30:05.174 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:30:05.174 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:30:05.174 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:30:05.174 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:30:05.174 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:30:05.174 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:30:05.174 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:30:05.174 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:30:05.174 altname enp24s0f1np1 00:30:05.174 altname ens785f1np1 00:30:05.174 inet 192.168.100.9/24 scope global mlx_0_1 00:30:05.174 valid_lft forever preferred_lft forever 00:30:05.174 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:30:05.174 14:01:04 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:05.174 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:30:05.174 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:30:05.174 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:30:05.174 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@90 -- # get_rdma_if_list 00:30:05.174 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:30:05.174 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:30:05.174 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:30:05.174 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:30:05.174 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:30:05.174 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:30:05.174 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:05.174 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:30:05.174 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:30:05.174 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@109 -- # continue 2 00:30:05.174 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:30:05.174 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:05.174 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:30:05.174 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:05.174 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:30:05.175 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:30:05.175 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@109 -- # continue 2 00:30:05.175 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:30:05.175 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:30:05.175 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:30:05.175 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:30:05.175 14:01:04 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:30:05.175 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:30:05.175 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:30:05.175 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:30:05.175 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:30:05.175 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:30:05.175 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:30:05.175 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:30:05.175 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:30:05.175 192.168.100.9' 00:30:05.175 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:30:05.175 192.168.100.9' 00:30:05.175 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@485 -- # head -n 1 00:30:05.175 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:30:05.175 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:30:05.175 192.168.100.9' 00:30:05.175 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@486 -- # tail -n +2 00:30:05.175 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@486 -- # head -n 1 00:30:05.175 14:01:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:30:05.175 14:01:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:30:05.175 14:01:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:30:05.175 14:01:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:30:05.175 14:01:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:30:05.175 14:01:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:30:05.433 14:01:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:30:05.433 14:01:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:05.433 14:01:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:05.433 14:01:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:05.433 14:01:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=1843576 00:30:05.433 14:01:05 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 1843576 00:30:05.433 14:01:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:30:05.433 14:01:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 1843576 ']' 00:30:05.433 14:01:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:05.433 14:01:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:05.433 14:01:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:05.433 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:05.433 14:01:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:05.433 14:01:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:05.433 [2024-12-05 14:01:05.082190] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 00:30:05.433 [2024-12-05 14:01:05.082235] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:05.433 [2024-12-05 14:01:05.154523] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:05.433 [2024-12-05 14:01:05.177054] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:05.433 [2024-12-05 14:01:05.177095] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:05.433 [2024-12-05 14:01:05.177102] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:05.433 [2024-12-05 14:01:05.177107] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:05.433 [2024-12-05 14:01:05.177112] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
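The nvmfappstart step traced above launches the target with -i 0 -e 0xFFFF -m 0x1E (cpumask 0x1E is binary 11110, i.e. the four cores 1-4 on which the reactor notices below report starting; -e 0xFFFF is the tracepoint group mask the app_setup_trace notices describe) and then waitforlisten 1843576 blocks until the app's RPC socket answers. Roughly, under the hood (a sketch, not common.sh's verbatim source; rpc_get_methods is just used here as a cheap RPC to probe with):

rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk
"$rootdir/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1E &
nvmfpid=$!    # 1843576 in this run
# poll the default RPC socket until the target answers; bail out if it died on startup
until "$rootdir/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
    kill -0 "$nvmfpid" 2> /dev/null || exit 1
    sleep 0.5
done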
00:30:05.433 [2024-12-05 14:01:05.178471] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:05.433 [2024-12-05 14:01:05.178560] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:05.433 [2024-12-05 14:01:05.178668] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:05.433 [2024-12-05 14:01:05.178669] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:30:05.433 14:01:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:05.433 14:01:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:30:05.433 14:01:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:05.433 14:01:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:05.433 14:01:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:05.692 14:01:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:05.692 14:01:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:30:05.692 14:01:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:05.692 14:01:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:05.693 [2024-12-05 14:01:05.332294] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x166e230/0x1672720) succeed. 00:30:05.693 [2024-12-05 14:01:05.340445] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x166f8c0/0x16b3dc0) succeed. 
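With the target up, the trace creates the RDMA transport, and the two mlx5 IB devices appear. Outside the harness the same step can be issued by hand against the default RPC socket; the flags below are copied from the rpc_cmd line above:

# Create the RDMA transport with 1024 shared buffers and an 8 KiB IO unit,
# exactly as rpc_cmd does in the trace above.
./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_transport \
    -t rdma --num-shared-buffers 1024 -u 8192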
00:30:05.693 14:01:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:05.693 14:01:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:30:05.693 14:01:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:30:05.693 14:01:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:05.693 14:01:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:05.693 14:01:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:05.693 14:01:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:05.693 14:01:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:30:05.693 14:01:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:05.693 14:01:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:30:05.693 14:01:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:05.693 14:01:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:30:05.693 14:01:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:05.693 14:01:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:30:05.693 14:01:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:05.693 14:01:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:30:05.693 14:01:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:05.693 14:01:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:30:05.693 14:01:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:05.693 14:01:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:30:05.693 14:01:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:05.693 14:01:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:30:05.693 14:01:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:05.693 14:01:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:30:05.693 14:01:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:05.693 14:01:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:30:05.693 14:01:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@36 -- # rpc_cmd 00:30:05.693 14:01:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:05.693 14:01:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:05.693 Malloc1 00:30:05.951 [2024-12-05 14:01:05.557213] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:30:05.951 Malloc2 00:30:05.951 Malloc3 00:30:05.951 Malloc4 00:30:05.951 Malloc5 00:30:05.951 Malloc6 00:30:05.951 Malloc7 00:30:06.210 Malloc8 00:30:06.210 Malloc9 00:30:06.210 Malloc10 00:30:06.210 14:01:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:06.210 14:01:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:30:06.210 14:01:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:06.210 14:01:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:06.210 14:01:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=1843683 00:30:06.210 14:01:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 1843683 /var/tmp/bdevperf.sock 00:30:06.210 14:01:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 1843683 ']' 00:30:06.210 14:01:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:06.210 14:01:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:30:06.210 14:01:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:06.210 14:01:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:30:06.210 14:01:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:06.210 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
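The ten for-loop/cat pairs above append one block of RPCs per subsystem to rpcs.txt, and replaying that file through rpc_cmd is what produces Malloc1 through Malloc10 and the RDMA listener notice on 192.168.100.8 port 4420. A sketch of what each iteration amounts to; the RPC names are standard SPDK, the NQNs and listener address are taken from the trace, but the malloc bdev sizes are illustrative:

# One subsystem per index: malloc bdev, subsystem, namespace, RDMA listener.
for i in $(seq 1 10); do
    ./scripts/rpc.py bdev_malloc_create -b "Malloc$i" 128 512
    ./scripts/rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a
    ./scripts/rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
    ./scripts/rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
        -t rdma -a 192.168.100.8 -s 4420
done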
00:30:06.210 14:01:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:06.210 14:01:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:30:06.210 14:01:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:06.210 14:01:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:30:06.210 14:01:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:06.210 14:01:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:06.210 { 00:30:06.210 "params": { 00:30:06.210 "name": "Nvme$subsystem", 00:30:06.210 "trtype": "$TEST_TRANSPORT", 00:30:06.210 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:06.210 "adrfam": "ipv4", 00:30:06.210 "trsvcid": "$NVMF_PORT", 00:30:06.210 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:06.210 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:06.210 "hdgst": ${hdgst:-false}, 00:30:06.210 "ddgst": ${ddgst:-false} 00:30:06.211 }, 00:30:06.211 "method": "bdev_nvme_attach_controller" 00:30:06.211 } 00:30:06.211 EOF 00:30:06.211 )") 00:30:06.211 14:01:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:30:06.211 14:01:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:06.211 14:01:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:06.211 { 00:30:06.211 "params": { 00:30:06.211 "name": "Nvme$subsystem", 00:30:06.211 "trtype": "$TEST_TRANSPORT", 00:30:06.211 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:06.211 "adrfam": "ipv4", 00:30:06.211 "trsvcid": "$NVMF_PORT", 00:30:06.211 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:06.211 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:06.211 "hdgst": ${hdgst:-false}, 00:30:06.211 "ddgst": ${ddgst:-false} 00:30:06.211 }, 00:30:06.211 "method": "bdev_nvme_attach_controller" 00:30:06.211 } 00:30:06.211 EOF 00:30:06.211 )") 00:30:06.211 14:01:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:30:06.211 14:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:06.211 14:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:06.211 { 00:30:06.211 "params": { 00:30:06.211 "name": "Nvme$subsystem", 00:30:06.211 "trtype": "$TEST_TRANSPORT", 00:30:06.211 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:06.211 "adrfam": "ipv4", 00:30:06.211 "trsvcid": "$NVMF_PORT", 00:30:06.211 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:06.211 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:06.211 "hdgst": ${hdgst:-false}, 00:30:06.211 "ddgst": ${ddgst:-false} 00:30:06.211 }, 00:30:06.211 "method": "bdev_nvme_attach_controller" 00:30:06.211 } 00:30:06.211 EOF 00:30:06.211 )") 00:30:06.211 14:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:30:06.211 14:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:06.211 14:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:06.211 { 00:30:06.211 "params": { 00:30:06.211 "name": "Nvme$subsystem", 00:30:06.211 "trtype": "$TEST_TRANSPORT", 00:30:06.211 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:06.211 "adrfam": "ipv4", 00:30:06.211 "trsvcid": "$NVMF_PORT", 00:30:06.211 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:06.211 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:06.211 "hdgst": ${hdgst:-false}, 00:30:06.211 "ddgst": ${ddgst:-false} 00:30:06.211 }, 00:30:06.211 "method": "bdev_nvme_attach_controller" 00:30:06.211 } 00:30:06.211 EOF 00:30:06.211 )") 00:30:06.211 14:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:30:06.211 14:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:06.211 14:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:06.211 { 00:30:06.211 "params": { 00:30:06.211 "name": "Nvme$subsystem", 00:30:06.211 "trtype": "$TEST_TRANSPORT", 00:30:06.211 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:06.211 "adrfam": "ipv4", 00:30:06.211 "trsvcid": "$NVMF_PORT", 00:30:06.211 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:06.211 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:06.211 "hdgst": ${hdgst:-false}, 00:30:06.211 "ddgst": ${ddgst:-false} 00:30:06.211 }, 00:30:06.211 "method": "bdev_nvme_attach_controller" 00:30:06.211 } 00:30:06.211 EOF 00:30:06.211 )") 00:30:06.211 14:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:30:06.211 14:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:06.211 14:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:06.211 { 00:30:06.211 "params": { 00:30:06.211 "name": "Nvme$subsystem", 00:30:06.211 "trtype": "$TEST_TRANSPORT", 00:30:06.211 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:06.211 "adrfam": "ipv4", 00:30:06.211 "trsvcid": "$NVMF_PORT", 00:30:06.211 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:06.211 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:06.211 "hdgst": ${hdgst:-false}, 00:30:06.211 "ddgst": ${ddgst:-false} 00:30:06.211 }, 00:30:06.211 "method": "bdev_nvme_attach_controller" 00:30:06.211 } 00:30:06.211 EOF 00:30:06.211 )") 00:30:06.211 14:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:30:06.211 [2024-12-05 14:01:06.028781] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 
00:30:06.211 [2024-12-05 14:01:06.028826] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1843683 ] 00:30:06.211 14:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:06.211 14:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:06.211 { 00:30:06.211 "params": { 00:30:06.211 "name": "Nvme$subsystem", 00:30:06.211 "trtype": "$TEST_TRANSPORT", 00:30:06.211 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:06.211 "adrfam": "ipv4", 00:30:06.211 "trsvcid": "$NVMF_PORT", 00:30:06.211 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:06.211 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:06.211 "hdgst": ${hdgst:-false}, 00:30:06.211 "ddgst": ${ddgst:-false} 00:30:06.211 }, 00:30:06.211 "method": "bdev_nvme_attach_controller" 00:30:06.211 } 00:30:06.211 EOF 00:30:06.211 )") 00:30:06.211 14:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:30:06.211 14:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:06.211 14:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:06.211 { 00:30:06.211 "params": { 00:30:06.211 "name": "Nvme$subsystem", 00:30:06.211 "trtype": "$TEST_TRANSPORT", 00:30:06.211 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:06.211 "adrfam": "ipv4", 00:30:06.211 "trsvcid": "$NVMF_PORT", 00:30:06.211 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:06.211 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:06.211 "hdgst": ${hdgst:-false}, 00:30:06.211 "ddgst": ${ddgst:-false} 00:30:06.211 }, 00:30:06.211 "method": "bdev_nvme_attach_controller" 00:30:06.211 } 00:30:06.211 EOF 00:30:06.211 )") 00:30:06.211 14:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:30:06.211 14:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:06.211 14:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:06.211 { 00:30:06.211 "params": { 00:30:06.211 "name": "Nvme$subsystem", 00:30:06.211 "trtype": "$TEST_TRANSPORT", 00:30:06.211 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:06.211 "adrfam": "ipv4", 00:30:06.211 "trsvcid": "$NVMF_PORT", 00:30:06.211 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:06.211 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:06.211 "hdgst": ${hdgst:-false}, 00:30:06.211 "ddgst": ${ddgst:-false} 00:30:06.211 }, 00:30:06.211 "method": "bdev_nvme_attach_controller" 00:30:06.211 } 00:30:06.211 EOF 00:30:06.211 )") 00:30:06.211 14:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:30:06.211 14:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:06.211 14:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:06.211 { 00:30:06.211 "params": { 00:30:06.211 "name": "Nvme$subsystem", 00:30:06.211 "trtype": "$TEST_TRANSPORT", 00:30:06.211 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:30:06.211 "adrfam": "ipv4", 00:30:06.211 "trsvcid": "$NVMF_PORT", 00:30:06.211 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:06.211 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:06.211 "hdgst": ${hdgst:-false}, 00:30:06.211 "ddgst": ${ddgst:-false} 00:30:06.211 }, 00:30:06.211 "method": "bdev_nvme_attach_controller" 00:30:06.211 } 00:30:06.211 EOF 00:30:06.211 )") 00:30:06.211 14:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:30:06.211 14:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 00:30:06.211 14:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:30:06.211 14:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:06.211 "params": { 00:30:06.211 "name": "Nvme1", 00:30:06.211 "trtype": "rdma", 00:30:06.211 "traddr": "192.168.100.8", 00:30:06.211 "adrfam": "ipv4", 00:30:06.211 "trsvcid": "4420", 00:30:06.211 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:06.211 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:06.211 "hdgst": false, 00:30:06.211 "ddgst": false 00:30:06.211 }, 00:30:06.211 "method": "bdev_nvme_attach_controller" 00:30:06.211 },{ 00:30:06.211 "params": { 00:30:06.211 "name": "Nvme2", 00:30:06.212 "trtype": "rdma", 00:30:06.212 "traddr": "192.168.100.8", 00:30:06.212 "adrfam": "ipv4", 00:30:06.212 "trsvcid": "4420", 00:30:06.212 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:30:06.212 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:30:06.212 "hdgst": false, 00:30:06.212 "ddgst": false 00:30:06.212 }, 00:30:06.212 "method": "bdev_nvme_attach_controller" 00:30:06.212 },{ 00:30:06.212 "params": { 00:30:06.212 "name": "Nvme3", 00:30:06.212 "trtype": "rdma", 00:30:06.212 "traddr": "192.168.100.8", 00:30:06.212 "adrfam": "ipv4", 00:30:06.212 "trsvcid": "4420", 00:30:06.212 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:30:06.212 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:30:06.212 "hdgst": false, 00:30:06.212 "ddgst": false 00:30:06.212 }, 00:30:06.212 "method": "bdev_nvme_attach_controller" 00:30:06.212 },{ 00:30:06.212 "params": { 00:30:06.212 "name": "Nvme4", 00:30:06.212 "trtype": "rdma", 00:30:06.212 "traddr": "192.168.100.8", 00:30:06.212 "adrfam": "ipv4", 00:30:06.212 "trsvcid": "4420", 00:30:06.212 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:30:06.212 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:30:06.212 "hdgst": false, 00:30:06.212 "ddgst": false 00:30:06.212 }, 00:30:06.212 "method": "bdev_nvme_attach_controller" 00:30:06.212 },{ 00:30:06.212 "params": { 00:30:06.212 "name": "Nvme5", 00:30:06.212 "trtype": "rdma", 00:30:06.212 "traddr": "192.168.100.8", 00:30:06.212 "adrfam": "ipv4", 00:30:06.212 "trsvcid": "4420", 00:30:06.212 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:30:06.212 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:30:06.212 "hdgst": false, 00:30:06.212 "ddgst": false 00:30:06.212 }, 00:30:06.212 "method": "bdev_nvme_attach_controller" 00:30:06.212 },{ 00:30:06.212 "params": { 00:30:06.212 "name": "Nvme6", 00:30:06.212 "trtype": "rdma", 00:30:06.212 "traddr": "192.168.100.8", 00:30:06.212 "adrfam": "ipv4", 00:30:06.212 "trsvcid": "4420", 00:30:06.212 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:30:06.212 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:30:06.212 "hdgst": false, 00:30:06.212 "ddgst": false 00:30:06.212 }, 00:30:06.212 "method": "bdev_nvme_attach_controller" 00:30:06.212 },{ 00:30:06.212 "params": { 00:30:06.212 "name": "Nvme7", 00:30:06.212 
"trtype": "rdma", 00:30:06.212 "traddr": "192.168.100.8", 00:30:06.212 "adrfam": "ipv4", 00:30:06.212 "trsvcid": "4420", 00:30:06.212 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:30:06.212 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:30:06.212 "hdgst": false, 00:30:06.212 "ddgst": false 00:30:06.212 }, 00:30:06.212 "method": "bdev_nvme_attach_controller" 00:30:06.212 },{ 00:30:06.212 "params": { 00:30:06.212 "name": "Nvme8", 00:30:06.212 "trtype": "rdma", 00:30:06.212 "traddr": "192.168.100.8", 00:30:06.212 "adrfam": "ipv4", 00:30:06.212 "trsvcid": "4420", 00:30:06.212 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:30:06.212 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:30:06.212 "hdgst": false, 00:30:06.212 "ddgst": false 00:30:06.212 }, 00:30:06.212 "method": "bdev_nvme_attach_controller" 00:30:06.212 },{ 00:30:06.212 "params": { 00:30:06.212 "name": "Nvme9", 00:30:06.212 "trtype": "rdma", 00:30:06.212 "traddr": "192.168.100.8", 00:30:06.212 "adrfam": "ipv4", 00:30:06.212 "trsvcid": "4420", 00:30:06.212 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:30:06.212 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:30:06.212 "hdgst": false, 00:30:06.212 "ddgst": false 00:30:06.212 }, 00:30:06.212 "method": "bdev_nvme_attach_controller" 00:30:06.212 },{ 00:30:06.212 "params": { 00:30:06.212 "name": "Nvme10", 00:30:06.212 "trtype": "rdma", 00:30:06.212 "traddr": "192.168.100.8", 00:30:06.212 "adrfam": "ipv4", 00:30:06.212 "trsvcid": "4420", 00:30:06.212 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:30:06.212 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:30:06.212 "hdgst": false, 00:30:06.212 "ddgst": false 00:30:06.212 }, 00:30:06.212 "method": "bdev_nvme_attach_controller" 00:30:06.212 }' 00:30:06.470 [2024-12-05 14:01:06.104435] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:06.470 [2024-12-05 14:01:06.125664] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:07.406 Running I/O for 10 seconds... 
00:30:07.406 14:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:07.406 14:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:30:07.406 14:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:30:07.406 14:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:07.406 14:01:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:07.406 14:01:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:07.406 14:01:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:30:07.406 14:01:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:30:07.406 14:01:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:30:07.406 14:01:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:30:07.406 14:01:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:30:07.406 14:01:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:30:07.406 14:01:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:30:07.406 14:01:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:30:07.406 14:01:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:30:07.406 14:01:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:07.406 14:01:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:07.665 14:01:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:07.665 14:01:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=19 00:30:07.665 14:01:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 19 -ge 100 ']' 00:30:07.665 14:01:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:30:07.924 14:01:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:30:07.924 14:01:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:30:07.924 14:01:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:30:07.924 14:01:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:30:07.924 14:01:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:07.924 
14:01:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:07.924 14:01:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:07.924 14:01:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=179 00:30:07.924 14:01:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 179 -ge 100 ']' 00:30:07.924 14:01:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:30:07.924 14:01:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:30:07.924 14:01:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:30:07.924 14:01:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 1843683 00:30:07.924 14:01:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 1843683 ']' 00:30:07.924 14:01:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 1843683 00:30:07.924 14:01:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:30:07.924 14:01:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:07.924 14:01:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1843683 00:30:07.924 14:01:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:07.924 14:01:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:07.924 14:01:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1843683' 00:30:07.924 killing process with pid 1843683 00:30:07.924 14:01:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 1843683 00:30:07.924 14:01:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 1843683 00:30:07.924 Received shutdown signal, test time was about 0.781053 seconds 00:30:07.924 00:30:07.924 Latency(us) 00:30:07.924 [2024-12-05T13:01:07.777Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:07.924 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:07.924 Verification LBA range: start 0x0 length 0x400 00:30:07.924 Nvme1n1 : 0.77 396.00 24.75 0.00 0.00 158659.08 7427.41 217482.43 00:30:07.924 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:07.924 Verification LBA range: start 0x0 length 0x400 00:30:07.924 Nvme2n1 : 0.77 416.23 26.01 0.00 0.00 147951.92 5946.79 154567.87 00:30:07.924 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:07.925 Verification LBA range: start 0x0 length 0x400 00:30:07.925 Nvme3n1 : 0.77 415.70 25.98 0.00 0.00 145196.68 7912.87 149907.53 00:30:07.925 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:07.925 Verification LBA range: start 0x0 length 0x400 00:30:07.925 Nvme4n1 : 0.77 415.12 25.94 0.00 0.00 
142661.71 8058.50 143693.75 00:30:07.925 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:07.925 Verification LBA range: start 0x0 length 0x400 00:30:07.925 Nvme5n1 : 0.77 414.40 25.90 0.00 0.00 140499.17 8592.50 133596.35 00:30:07.925 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:07.925 Verification LBA range: start 0x0 length 0x400 00:30:07.925 Nvme6n1 : 0.77 413.81 25.86 0.00 0.00 137417.84 8932.31 126605.84 00:30:07.925 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:07.925 Verification LBA range: start 0x0 length 0x400 00:30:07.925 Nvme7n1 : 0.77 413.21 25.83 0.00 0.00 134904.95 9223.59 118838.61 00:30:07.925 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:07.925 Verification LBA range: start 0x0 length 0x400 00:30:07.925 Nvme8n1 : 0.78 412.63 25.79 0.00 0.00 132134.38 9466.31 111848.11 00:30:07.925 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:07.925 Verification LBA range: start 0x0 length 0x400 00:30:07.925 Nvme9n1 : 0.78 411.87 25.74 0.00 0.00 130449.79 10145.94 98255.45 00:30:07.925 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:07.925 Verification LBA range: start 0x0 length 0x400 00:30:07.925 Nvme10n1 : 0.78 328.02 20.50 0.00 0.00 159841.80 2682.12 223696.21 00:30:07.925 [2024-12-05T13:01:07.778Z] =================================================================================================================== 00:30:07.925 [2024-12-05T13:01:07.778Z] Total : 4037.00 252.31 0.00 0.00 142545.23 2682.12 223696.21 00:30:08.184 14:01:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:30:09.561 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 1843576 00:30:09.561 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:30:09.561 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:30:09.561 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:30:09.561 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:09.561 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:30:09.561 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:09.561 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:30:09.561 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:30:09.561 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:30:09.561 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:30:09.561 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:09.561 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 
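The read_io_count checks above are the harness's progress gate: the run only counts once Nvme1n1 reports at least 100 completed reads via bdev_get_iostat (19 on the first sample, 179 a quarter-second later), after which bdevperf is killed and the summary table is printed at about 0.78 seconds of test time. A standalone sketch of that gate, mirroring the loop bounds, sleep interval, and jq path from the trace:

# Poll bdevperf's RPC socket until the bdev shows real read traffic.
waitforio() {
    local sock=$1 bdev=$2 i=10 ops
    while (( i-- > 0 )); do
        ops=$(./scripts/rpc.py -s "$sock" bdev_get_iostat -b "$bdev" \
            | jq -r '.bdevs[0].num_read_ops')
        (( ops >= 100 )) && return 0   # enough I/O: workload is live
        sleep 0.25
    done
    return 1
}

# e.g.: waitforio /var/tmp/bdevperf.sock Nvme1n1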
00:30:09.561 rmmod nvme_rdma 00:30:09.561 rmmod nvme_fabrics 00:30:09.561 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:09.561 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:30:09.562 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:30:09.562 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 1843576 ']' 00:30:09.562 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 1843576 00:30:09.562 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 1843576 ']' 00:30:09.562 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 1843576 00:30:09.562 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:30:09.562 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:09.562 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1843576 00:30:09.562 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:09.562 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:09.562 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1843576' 00:30:09.562 killing process with pid 1843576 00:30:09.562 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 1843576 00:30:09.562 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 1843576 00:30:09.823 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:09.823 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:30:09.823 00:30:09.823 real 0m4.749s 00:30:09.823 user 0m19.101s 00:30:09.823 sys 0m0.981s 00:30:09.823 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:09.823 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:09.823 ************************************ 00:30:09.823 END TEST nvmf_shutdown_tc2 00:30:09.823 ************************************ 00:30:09.823 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:30:09.823 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:09.823 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:09.823 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:09.823 ************************************ 00:30:09.823 START TEST nvmf_shutdown_tc3 00:30:09.823 ************************************ 00:30:09.823 14:01:09 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:30:09.823 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:30:09.823 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:30:09.823 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:30:09.823 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:09.823 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:09.823 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:09.823 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:09.823 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:09.823 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:09.823 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:09.823 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:09.823 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:09.823 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:30:09.823 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:09.823 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:09.823 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:30:09.823 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:09.823 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:09.823 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:09.823 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:09.823 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:09.823 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:30:09.823 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:09.823 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:30:09.823 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:30:09.823 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:30:09.823 14:01:09 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:30:09.823 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:30:09.823 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:30:09.823 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:09.823 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:09.823 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:09.823 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:09.823 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:09.823 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:09.823 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:09.823 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:09.823 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:09.823 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:09.823 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:09.823 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:09.823 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:09.824 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:30:09.824 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:30:09.824 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:30:09.824 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:30:09.824 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:30:09.824 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:09.824 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:09.824 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:30:09.824 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:30:09.824 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown 
]] 00:30:09.824 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:30:09.824 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:30:09.824 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:30:09.824 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:30:09.824 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:30:09.824 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:09.824 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:30:09.824 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:30:09.824 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:30:09.824 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:30:09.824 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:30:09.824 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:30:09.824 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:30:09.824 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:30:09.824 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:09.824 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:30:09.824 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:09.824 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:09.824 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:30:09.824 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:09.824 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:09.824 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:30:09.824 Found net devices under 0000:18:00.0: mlx_0_0 00:30:09.824 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:09.824 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:09.824 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:09.824 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # 
[[ rdma == tcp ]] 00:30:09.824 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:09.824 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:09.824 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:30:09.824 Found net devices under 0000:18:00.1: mlx_0_1 00:30:09.824 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:09.824 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:09.824 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:30:09.824 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:09.824 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:30:09.824 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:30:09.824 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # rdma_device_init 00:30:09.824 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:30:09.824 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@62 -- # uname 00:30:09.824 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:30:09.824 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@66 -- # modprobe ib_cm 00:30:09.824 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@67 -- # modprobe ib_core 00:30:09.824 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@68 -- # modprobe ib_umad 00:30:10.084 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:30:10.084 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@70 -- # modprobe iw_cm 00:30:10.084 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:30:10.084 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:30:10.084 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@530 -- # allocate_nic_ips 00:30:10.084 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:30:10.084 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@77 -- # get_rdma_if_list 00:30:10.084 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:30:10.084 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:30:10.084 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:30:10.084 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@58 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:30:10.084 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:30:10.084 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:30:10.084 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:10.084 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:30:10.084 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:30:10.084 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@109 -- # continue 2 00:30:10.084 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:30:10.084 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:10.084 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:30:10.084 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:10.084 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:30:10.084 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:30:10.084 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@109 -- # continue 2 00:30:10.084 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:30:10.084 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:30:10.084 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:30:10.084 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:30:10.084 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:30:10.084 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:30:10.084 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:30:10.084 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:30:10.084 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:30:10.084 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:30:10.084 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:30:10.084 altname enp24s0f0np0 00:30:10.084 altname ens785f0np0 00:30:10.084 inet 192.168.100.8/24 scope global mlx_0_0 00:30:10.084 valid_lft forever preferred_lft forever 00:30:10.084 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:30:10.084 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:30:10.084 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:30:10.084 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:30:10.084 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:30:10.084 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:30:10.084 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:30:10.084 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:30:10.084 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:30:10.084 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:30:10.084 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:30:10.084 altname enp24s0f1np1 00:30:10.084 altname ens785f1np1 00:30:10.084 inet 192.168.100.9/24 scope global mlx_0_1 00:30:10.084 valid_lft forever preferred_lft forever 00:30:10.084 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:30:10.084 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:10.084 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:30:10.084 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:30:10.084 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:30:10.084 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@90 -- # get_rdma_if_list 00:30:10.084 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:30:10.084 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:30:10.084 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:30:10.084 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:30:10.084 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:30:10.084 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:30:10.084 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:10.084 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:30:10.084 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:30:10.084 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@109 -- # continue 2 00:30:10.084 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # for net_dev in 
"${net_devs[@]}" 00:30:10.085 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:10.085 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:30:10.085 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:10.085 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:30:10.085 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:30:10.085 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@109 -- # continue 2 00:30:10.085 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:30:10.085 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:30:10.085 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:30:10.085 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:30:10.085 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:30:10.085 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:30:10.085 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:30:10.085 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:30:10.085 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:30:10.085 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:30:10.085 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:30:10.085 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:30:10.085 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:30:10.085 192.168.100.9' 00:30:10.085 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:30:10.085 192.168.100.9' 00:30:10.085 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@485 -- # head -n 1 00:30:10.085 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:30:10.085 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:30:10.085 192.168.100.9' 00:30:10.085 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@486 -- # tail -n +2 00:30:10.085 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@486 -- # head -n 1 00:30:10.085 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:30:10.085 
14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:30:10.085 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:30:10.085 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:30:10.085 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:30:10.085 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:30:10.085 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:30:10.085 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:10.085 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:10.085 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:10.085 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=1844578 00:30:10.085 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 1844578 00:30:10.085 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:30:10.085 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 1844578 ']' 00:30:10.085 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:10.085 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:10.085 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:10.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:10.085 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:10.085 14:01:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:10.085 [2024-12-05 14:01:09.910742] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 00:30:10.085 [2024-12-05 14:01:09.910786] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:10.344 [2024-12-05 14:01:09.985553] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:10.344 [2024-12-05 14:01:10.009076] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:10.344 [2024-12-05 14:01:10.009112] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:30:10.344 [2024-12-05 14:01:10.009119] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:10.344 [2024-12-05 14:01:10.009124] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:10.344 [2024-12-05 14:01:10.009129] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:10.344 [2024-12-05 14:01:10.010397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:10.344 [2024-12-05 14:01:10.010468] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:10.344 [2024-12-05 14:01:10.010576] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:10.344 [2024-12-05 14:01:10.010578] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:30:10.344 14:01:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:10.344 14:01:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:30:10.344 14:01:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:10.344 14:01:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:10.344 14:01:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:10.344 14:01:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:10.344 14:01:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:30:10.344 14:01:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:10.344 14:01:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:10.344 [2024-12-05 14:01:10.165206] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x93d230/0x941720) succeed. 00:30:10.344 [2024-12-05 14:01:10.173377] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x93e8c0/0x982dc0) succeed. 
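At this point the target side is up: nvmfappstart launched nvmf_tgt with core mask 0x1E (cores 1-4, matching the four "Reactor started on core" notices), shutdown.sh@21 registered the RDMA transport, and the two mlx5 IB devices were claimed. Reduced to its two essential commands (workspace paths shortened; rpc.py stands in for the test suite's rpc_cmd wrapper):

    # Start the target: -i 0 = shm id, -e 0xFFFF = all tracepoint groups, -m 0x1E = cores 1-4.
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &

    # Register the RDMA transport with 1024 shared buffers and an 8 KiB I/O unit size (-u),
    # exactly as rpc_cmd does at target/shutdown.sh@21.
    ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192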
00:30:10.603 14:01:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:10.603 14:01:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:30:10.603 14:01:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:30:10.603 14:01:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:10.604 14:01:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:10.604 14:01:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:10.604 14:01:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:10.604 14:01:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:30:10.604 14:01:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:10.604 14:01:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:30:10.604 14:01:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:10.604 14:01:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:30:10.604 14:01:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:10.604 14:01:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:30:10.604 14:01:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:10.604 14:01:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:30:10.604 14:01:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:10.604 14:01:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:30:10.604 14:01:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:10.604 14:01:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:30:10.604 14:01:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:10.604 14:01:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:30:10.604 14:01:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:10.604 14:01:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:30:10.604 14:01:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:10.604 14:01:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:30:10.604 14:01:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@36 -- # rpc_cmd 00:30:10.604 14:01:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:10.604 14:01:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:10.604 Malloc1 00:30:10.604 [2024-12-05 14:01:10.379132] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:30:10.604 Malloc2 00:30:10.604 Malloc3 00:30:10.863 Malloc4 00:30:10.863 Malloc5 00:30:10.863 Malloc6 00:30:10.863 Malloc7 00:30:10.863 Malloc8 00:30:10.863 Malloc9 00:30:11.123 Malloc10 00:30:11.123 14:01:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:11.123 14:01:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:30:11.123 14:01:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:11.123 14:01:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:11.123 14:01:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=1844702 00:30:11.123 14:01:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 1844702 /var/tmp/bdevperf.sock 00:30:11.123 14:01:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 1844702 ']' 00:30:11.123 14:01:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:11.123 14:01:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:30:11.123 14:01:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:11.123 14:01:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:30:11.123 14:01:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:11.123 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
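The loop at shutdown.sh@28-29 appends one here-doc of RPCs per subsystem (1..10) to rpcs.txt, and shutdown.sh@36 replays the whole batch through rpc_cmd. The here-doc bodies are not echoed in the trace, but judging from the Malloc1..Malloc10 bdevs and the 192.168.100.8:4420 RDMA listener that appear right after, a representative iteration would look like this (a sketch, not the verbatim script; the malloc size and block size are illustrative):

    for i in {1..10}; do
        # One malloc bdev per subsystem: 128 MiB, 512-byte blocks.
        rpc_cmd bdev_malloc_create -b Malloc$i 128 512
        # Subsystem nqn.2016-06.io.spdk:cnode$i, open to any host (-a), serial SPDK$i.
        rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
        rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i \
            -t rdma -a 192.168.100.8 -s 4420
    done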
00:30:11.123 14:01:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:30:11.123 14:01:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:11.123 14:01:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:30:11.123 14:01:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:11.123 14:01:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:11.123 14:01:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:11.123 { 00:30:11.123 "params": { 00:30:11.123 "name": "Nvme$subsystem", 00:30:11.123 "trtype": "$TEST_TRANSPORT", 00:30:11.123 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:11.123 "adrfam": "ipv4", 00:30:11.123 "trsvcid": "$NVMF_PORT", 00:30:11.123 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:11.123 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:11.123 "hdgst": ${hdgst:-false}, 00:30:11.123 "ddgst": ${ddgst:-false} 00:30:11.123 }, 00:30:11.123 "method": "bdev_nvme_attach_controller" 00:30:11.123 } 00:30:11.123 EOF 00:30:11.123 )") 00:30:11.123 14:01:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:30:11.123 14:01:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:11.123 14:01:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:11.123 { 00:30:11.123 "params": { 00:30:11.123 "name": "Nvme$subsystem", 00:30:11.123 "trtype": "$TEST_TRANSPORT", 00:30:11.123 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:11.123 "adrfam": "ipv4", 00:30:11.123 "trsvcid": "$NVMF_PORT", 00:30:11.124 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:11.124 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:11.124 "hdgst": ${hdgst:-false}, 00:30:11.124 "ddgst": ${ddgst:-false} 00:30:11.124 }, 00:30:11.124 "method": "bdev_nvme_attach_controller" 00:30:11.124 } 00:30:11.124 EOF 00:30:11.124 )") 00:30:11.124 14:01:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:30:11.124 14:01:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:11.124 14:01:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:11.124 { 00:30:11.124 "params": { 00:30:11.124 "name": "Nvme$subsystem", 00:30:11.124 "trtype": "$TEST_TRANSPORT", 00:30:11.124 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:11.124 "adrfam": "ipv4", 00:30:11.124 "trsvcid": "$NVMF_PORT", 00:30:11.124 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:11.124 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:11.124 "hdgst": ${hdgst:-false}, 00:30:11.124 "ddgst": ${ddgst:-false} 00:30:11.124 }, 00:30:11.124 "method": "bdev_nvme_attach_controller" 00:30:11.124 } 00:30:11.124 EOF 00:30:11.124 )") 00:30:11.124 14:01:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:30:11.124 14:01:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:11.124 14:01:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:11.124 { 00:30:11.124 "params": { 00:30:11.124 "name": "Nvme$subsystem", 00:30:11.124 "trtype": "$TEST_TRANSPORT", 00:30:11.124 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:11.124 "adrfam": "ipv4", 00:30:11.124 "trsvcid": "$NVMF_PORT", 00:30:11.124 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:11.124 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:11.124 "hdgst": ${hdgst:-false}, 00:30:11.124 "ddgst": ${ddgst:-false} 00:30:11.124 }, 00:30:11.124 "method": "bdev_nvme_attach_controller" 00:30:11.124 } 00:30:11.124 EOF 00:30:11.124 )") 00:30:11.124 14:01:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:30:11.124 14:01:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:11.124 14:01:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:11.124 { 00:30:11.124 "params": { 00:30:11.124 "name": "Nvme$subsystem", 00:30:11.124 "trtype": "$TEST_TRANSPORT", 00:30:11.124 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:11.124 "adrfam": "ipv4", 00:30:11.124 "trsvcid": "$NVMF_PORT", 00:30:11.124 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:11.124 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:11.124 "hdgst": ${hdgst:-false}, 00:30:11.124 "ddgst": ${ddgst:-false} 00:30:11.124 }, 00:30:11.124 "method": "bdev_nvme_attach_controller" 00:30:11.124 } 00:30:11.124 EOF 00:30:11.124 )") 00:30:11.124 14:01:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:30:11.124 14:01:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:11.124 14:01:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:11.124 { 00:30:11.124 "params": { 00:30:11.124 "name": "Nvme$subsystem", 00:30:11.124 "trtype": "$TEST_TRANSPORT", 00:30:11.124 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:11.124 "adrfam": "ipv4", 00:30:11.124 "trsvcid": "$NVMF_PORT", 00:30:11.124 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:11.124 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:11.124 "hdgst": ${hdgst:-false}, 00:30:11.124 "ddgst": ${ddgst:-false} 00:30:11.124 }, 00:30:11.124 "method": "bdev_nvme_attach_controller" 00:30:11.124 } 00:30:11.124 EOF 00:30:11.124 )") 00:30:11.124 14:01:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:30:11.124 [2024-12-05 14:01:10.848600] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 
00:30:11.124 [2024-12-05 14:01:10.848646] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1844702 ] 00:30:11.124 14:01:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:11.124 14:01:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:11.124 { 00:30:11.124 "params": { 00:30:11.124 "name": "Nvme$subsystem", 00:30:11.124 "trtype": "$TEST_TRANSPORT", 00:30:11.124 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:11.124 "adrfam": "ipv4", 00:30:11.124 "trsvcid": "$NVMF_PORT", 00:30:11.124 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:11.124 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:11.124 "hdgst": ${hdgst:-false}, 00:30:11.124 "ddgst": ${ddgst:-false} 00:30:11.124 }, 00:30:11.124 "method": "bdev_nvme_attach_controller" 00:30:11.124 } 00:30:11.124 EOF 00:30:11.124 )") 00:30:11.124 14:01:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:30:11.124 14:01:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:11.124 14:01:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:11.124 { 00:30:11.124 "params": { 00:30:11.124 "name": "Nvme$subsystem", 00:30:11.124 "trtype": "$TEST_TRANSPORT", 00:30:11.124 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:11.124 "adrfam": "ipv4", 00:30:11.124 "trsvcid": "$NVMF_PORT", 00:30:11.124 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:11.124 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:11.124 "hdgst": ${hdgst:-false}, 00:30:11.124 "ddgst": ${ddgst:-false} 00:30:11.124 }, 00:30:11.124 "method": "bdev_nvme_attach_controller" 00:30:11.124 } 00:30:11.124 EOF 00:30:11.124 )") 00:30:11.124 14:01:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:30:11.124 14:01:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:11.124 14:01:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:11.124 { 00:30:11.124 "params": { 00:30:11.124 "name": "Nvme$subsystem", 00:30:11.124 "trtype": "$TEST_TRANSPORT", 00:30:11.124 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:11.124 "adrfam": "ipv4", 00:30:11.124 "trsvcid": "$NVMF_PORT", 00:30:11.124 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:11.124 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:11.124 "hdgst": ${hdgst:-false}, 00:30:11.124 "ddgst": ${ddgst:-false} 00:30:11.124 }, 00:30:11.124 "method": "bdev_nvme_attach_controller" 00:30:11.124 } 00:30:11.124 EOF 00:30:11.124 )") 00:30:11.124 14:01:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:30:11.124 14:01:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:11.124 14:01:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:11.124 { 00:30:11.124 "params": { 00:30:11.124 "name": "Nvme$subsystem", 00:30:11.124 "trtype": "$TEST_TRANSPORT", 00:30:11.124 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:30:11.124 "adrfam": "ipv4", 00:30:11.124 "trsvcid": "$NVMF_PORT", 00:30:11.124 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:11.124 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:11.124 "hdgst": ${hdgst:-false}, 00:30:11.124 "ddgst": ${ddgst:-false} 00:30:11.124 }, 00:30:11.124 "method": "bdev_nvme_attach_controller" 00:30:11.124 } 00:30:11.124 EOF 00:30:11.124 )") 00:30:11.124 14:01:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:30:11.124 14:01:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 00:30:11.124 14:01:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:30:11.124 14:01:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:11.124 "params": { 00:30:11.124 "name": "Nvme1", 00:30:11.124 "trtype": "rdma", 00:30:11.124 "traddr": "192.168.100.8", 00:30:11.124 "adrfam": "ipv4", 00:30:11.124 "trsvcid": "4420", 00:30:11.124 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:11.124 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:11.124 "hdgst": false, 00:30:11.124 "ddgst": false 00:30:11.124 }, 00:30:11.124 "method": "bdev_nvme_attach_controller" 00:30:11.124 },{ 00:30:11.124 "params": { 00:30:11.124 "name": "Nvme2", 00:30:11.124 "trtype": "rdma", 00:30:11.124 "traddr": "192.168.100.8", 00:30:11.124 "adrfam": "ipv4", 00:30:11.124 "trsvcid": "4420", 00:30:11.124 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:30:11.124 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:30:11.124 "hdgst": false, 00:30:11.124 "ddgst": false 00:30:11.124 }, 00:30:11.124 "method": "bdev_nvme_attach_controller" 00:30:11.124 },{ 00:30:11.124 "params": { 00:30:11.124 "name": "Nvme3", 00:30:11.124 "trtype": "rdma", 00:30:11.124 "traddr": "192.168.100.8", 00:30:11.124 "adrfam": "ipv4", 00:30:11.124 "trsvcid": "4420", 00:30:11.124 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:30:11.124 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:30:11.124 "hdgst": false, 00:30:11.124 "ddgst": false 00:30:11.124 }, 00:30:11.124 "method": "bdev_nvme_attach_controller" 00:30:11.124 },{ 00:30:11.124 "params": { 00:30:11.124 "name": "Nvme4", 00:30:11.124 "trtype": "rdma", 00:30:11.124 "traddr": "192.168.100.8", 00:30:11.125 "adrfam": "ipv4", 00:30:11.125 "trsvcid": "4420", 00:30:11.125 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:30:11.125 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:30:11.125 "hdgst": false, 00:30:11.125 "ddgst": false 00:30:11.125 }, 00:30:11.125 "method": "bdev_nvme_attach_controller" 00:30:11.125 },{ 00:30:11.125 "params": { 00:30:11.125 "name": "Nvme5", 00:30:11.125 "trtype": "rdma", 00:30:11.125 "traddr": "192.168.100.8", 00:30:11.125 "adrfam": "ipv4", 00:30:11.125 "trsvcid": "4420", 00:30:11.125 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:30:11.125 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:30:11.125 "hdgst": false, 00:30:11.125 "ddgst": false 00:30:11.125 }, 00:30:11.125 "method": "bdev_nvme_attach_controller" 00:30:11.125 },{ 00:30:11.125 "params": { 00:30:11.125 "name": "Nvme6", 00:30:11.125 "trtype": "rdma", 00:30:11.125 "traddr": "192.168.100.8", 00:30:11.125 "adrfam": "ipv4", 00:30:11.125 "trsvcid": "4420", 00:30:11.125 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:30:11.125 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:30:11.125 "hdgst": false, 00:30:11.125 "ddgst": false 00:30:11.125 }, 00:30:11.125 "method": "bdev_nvme_attach_controller" 00:30:11.125 },{ 00:30:11.125 "params": { 00:30:11.125 "name": "Nvme7", 00:30:11.125 
"trtype": "rdma", 00:30:11.125 "traddr": "192.168.100.8", 00:30:11.125 "adrfam": "ipv4", 00:30:11.125 "trsvcid": "4420", 00:30:11.125 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:30:11.125 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:30:11.125 "hdgst": false, 00:30:11.125 "ddgst": false 00:30:11.125 }, 00:30:11.125 "method": "bdev_nvme_attach_controller" 00:30:11.125 },{ 00:30:11.125 "params": { 00:30:11.125 "name": "Nvme8", 00:30:11.125 "trtype": "rdma", 00:30:11.125 "traddr": "192.168.100.8", 00:30:11.125 "adrfam": "ipv4", 00:30:11.125 "trsvcid": "4420", 00:30:11.125 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:30:11.125 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:30:11.125 "hdgst": false, 00:30:11.125 "ddgst": false 00:30:11.125 }, 00:30:11.125 "method": "bdev_nvme_attach_controller" 00:30:11.125 },{ 00:30:11.125 "params": { 00:30:11.125 "name": "Nvme9", 00:30:11.125 "trtype": "rdma", 00:30:11.125 "traddr": "192.168.100.8", 00:30:11.125 "adrfam": "ipv4", 00:30:11.125 "trsvcid": "4420", 00:30:11.125 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:30:11.125 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:30:11.125 "hdgst": false, 00:30:11.125 "ddgst": false 00:30:11.125 }, 00:30:11.125 "method": "bdev_nvme_attach_controller" 00:30:11.125 },{ 00:30:11.125 "params": { 00:30:11.125 "name": "Nvme10", 00:30:11.125 "trtype": "rdma", 00:30:11.125 "traddr": "192.168.100.8", 00:30:11.125 "adrfam": "ipv4", 00:30:11.125 "trsvcid": "4420", 00:30:11.125 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:30:11.125 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:30:11.125 "hdgst": false, 00:30:11.125 "ddgst": false 00:30:11.125 }, 00:30:11.125 "method": "bdev_nvme_attach_controller" 00:30:11.125 }' 00:30:11.125 [2024-12-05 14:01:10.924007] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:11.125 [2024-12-05 14:01:10.945263] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:12.062 Running I/O for 10 seconds... 
00:30:12.062 14:01:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:12.062 14:01:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:30:12.062 14:01:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:30:12.062 14:01:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:12.062 14:01:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:12.322 14:01:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:12.322 14:01:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:12.322 14:01:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:30:12.322 14:01:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:30:12.322 14:01:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:30:12.322 14:01:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:30:12.322 14:01:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:30:12.322 14:01:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:30:12.322 14:01:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:30:12.322 14:01:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:30:12.322 14:01:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:30:12.322 14:01:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:12.322 14:01:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:12.322 14:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:12.322 14:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=19 00:30:12.322 14:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 19 -ge 100 ']' 00:30:12.322 14:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:30:12.581 14:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:30:12.581 14:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:30:12.581 14:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:30:12.581 14:01:12 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:30:12.581 14:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:12.581 14:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:12.840 14:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:12.840 14:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=171 00:30:12.840 14:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 171 -ge 100 ']' 00:30:12.840 14:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:30:12.840 14:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:30:12.840 14:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:30:12.840 14:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 1844578 00:30:12.840 14:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 1844578 ']' 00:30:12.840 14:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 1844578 00:30:12.840 14:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname 00:30:12.840 14:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:12.840 14:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1844578 00:30:12.840 14:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:12.840 14:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:12.840 14:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1844578' 00:30:12.840 killing process with pid 1844578 00:30:12.840 14:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 1844578 00:30:12.840 14:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 1844578 00:30:13.358 2684.00 IOPS, 167.75 MiB/s [2024-12-05T13:01:13.211Z] 14:01:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:30:13.941 [2024-12-05 14:01:13.553429] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:13.941 [2024-12-05 14:01:13.553468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32766 cdw0:cff200 sqhd:6de0 p:0 m:0 dnr:0 00:30:13.941 [2024-12-05 14:01:13.553478] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:13.941 [2024-12-05 14:01:13.553484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:0 cid:32766 cdw0:cff200 sqhd:6de0 p:0 m:0 dnr:0 00:30:13.941 [2024-12-05 14:01:13.553507] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:13.941 [2024-12-05 14:01:13.553513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32766 cdw0:cff200 sqhd:6de0 p:0 m:0 dnr:0 00:30:13.941 [2024-12-05 14:01:13.553520] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:30:13.941 [2024-12-05 14:01:13.553525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32766 cdw0:cff200 sqhd:6de0 p:0 m:0 dnr:0 00:30:13.941 [2024-12-05 14:01:13.555931] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:30:13.941 [2024-12-05 14:01:13.555970] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:30:13.942 [2024-12-05 14:01:13.556022] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:13.942 [2024-12-05 14:01:13.556047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32711 cdw0:1 sqhd:c990 p:0 m:0 dnr:0 00:30:13.942 [2024-12-05 14:01:13.556070] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:13.942 [2024-12-05 14:01:13.556092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32711 cdw0:1 sqhd:c990 p:0 m:0 dnr:0 00:30:13.942 [2024-12-05 14:01:13.556115] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:13.942 [2024-12-05 14:01:13.556136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32711 cdw0:1 sqhd:c990 p:0 m:0 dnr:0 00:30:13.942 [2024-12-05 14:01:13.556159] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:30:13.942 [2024-12-05 14:01:13.556180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32711 cdw0:1 sqhd:c990 p:0 m:0 dnr:0 00:30:13.942 [2024-12-05 14:01:13.558436] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:30:13.942 [2024-12-05 14:01:13.558471] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 
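The error flood here is the point of tc3: the waitforio gate traced earlier (shutdown.sh@58-70) confirmed real traffic was flowing (19 reads on the first poll, 171 a quarter-second later), and killprocess then SIGKILLed nvmf_tgt under live I/O, so each controller's admin queue pair now reports CQ transport error -6 (ENXIO) and drops into failed state, completing its pending ASYNC EVENT REQUESTs as ABORTED - SQ DELETION. The polling gate itself amounts to (a sketch; rpc_cmd is the test suite's JSON-RPC wrapper):

    waitforio() {
        local ret=1 i read_io_count
        for ((i = 10; i != 0; i--)); do
            read_io_count=$(rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 |
                jq -r '.bdevs[0].num_read_ops')
            if [ "$read_io_count" -ge 100 ]; then
                ret=0   # enough reads completed; safe to kill the target under load
                break
            fi
            sleep 0.25
        done
        return $ret
    }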
00:30:13.942 [2024-12-05 14:01:13.558514] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:13.942 [2024-12-05 14:01:13.558538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32711 cdw0:1 sqhd:c990 p:0 m:0 dnr:0 00:30:13.942 [2024-12-05 14:01:13.558561] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:13.942 [2024-12-05 14:01:13.558589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32711 cdw0:1 sqhd:c990 p:0 m:0 dnr:0 00:30:13.942 [2024-12-05 14:01:13.558596] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:13.942 [2024-12-05 14:01:13.558602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32711 cdw0:1 sqhd:c990 p:0 m:0 dnr:0 00:30:13.942 [2024-12-05 14:01:13.558608] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:30:13.942 [2024-12-05 14:01:13.558614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32711 cdw0:1 sqhd:c990 p:0 m:0 dnr:0 00:30:13.942 [2024-12-05 14:01:13.560903] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:30:13.942 [2024-12-05 14:01:13.560937] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:30:13.942 [2024-12-05 14:01:13.560976] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:13.942 [2024-12-05 14:01:13.560999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32711 cdw0:1 sqhd:c990 p:0 m:0 dnr:0 00:30:13.942 [2024-12-05 14:01:13.561022] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:13.942 [2024-12-05 14:01:13.561043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32711 cdw0:1 sqhd:c990 p:0 m:0 dnr:0 00:30:13.942 [2024-12-05 14:01:13.561066] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:13.942 [2024-12-05 14:01:13.561087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32711 cdw0:1 sqhd:c990 p:0 m:0 dnr:0 00:30:13.942 [2024-12-05 14:01:13.561109] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:30:13.942 [2024-12-05 14:01:13.561130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32711 cdw0:1 sqhd:c990 p:0 m:0 dnr:0 00:30:13.942 [2024-12-05 14:01:13.563340] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:30:13.942 [2024-12-05 14:01:13.563386] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 
00:30:13.942 [2024-12-05 14:01:13.563431] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:13.942 [2024-12-05 14:01:13.563456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32711 cdw0:1 sqhd:c990 p:0 m:0 dnr:0 00:30:13.942 [2024-12-05 14:01:13.563479] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:13.942 [2024-12-05 14:01:13.563500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32711 cdw0:1 sqhd:c990 p:0 m:0 dnr:0 00:30:13.942 [2024-12-05 14:01:13.563523] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:13.942 [2024-12-05 14:01:13.563556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32711 cdw0:1 sqhd:c990 p:0 m:0 dnr:0 00:30:13.942 [2024-12-05 14:01:13.563562] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:30:13.942 [2024-12-05 14:01:13.563568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32711 cdw0:1 sqhd:c990 p:0 m:0 dnr:0 00:30:13.942 [2024-12-05 14:01:13.566031] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:30:13.942 [2024-12-05 14:01:13.566067] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:30:13.942 [2024-12-05 14:01:13.566107] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:13.942 [2024-12-05 14:01:13.566129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32711 cdw0:1 sqhd:c990 p:0 m:0 dnr:0 00:30:13.942 [2024-12-05 14:01:13.566153] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:13.942 [2024-12-05 14:01:13.566175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32711 cdw0:1 sqhd:c990 p:0 m:0 dnr:0 00:30:13.942 [2024-12-05 14:01:13.566196] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:13.942 [2024-12-05 14:01:13.566217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32711 cdw0:1 sqhd:c990 p:0 m:0 dnr:0 00:30:13.942 [2024-12-05 14:01:13.566240] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:30:13.942 [2024-12-05 14:01:13.566261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32711 cdw0:1 sqhd:c990 p:0 m:0 dnr:0 00:30:13.942 [2024-12-05 14:01:13.568713] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:30:13.942 [2024-12-05 14:01:13.568745] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 
00:30:13.942 [2024-12-05 14:01:13.568786] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:13.942 [2024-12-05 14:01:13.568810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32711 cdw0:1 sqhd:c990 p:0 m:0 dnr:0 00:30:13.942 [2024-12-05 14:01:13.568833] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:13.942 [2024-12-05 14:01:13.568854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32711 cdw0:1 sqhd:c990 p:0 m:0 dnr:0 00:30:13.942 [2024-12-05 14:01:13.568877] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:13.942 [2024-12-05 14:01:13.568898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32711 cdw0:1 sqhd:c990 p:0 m:0 dnr:0 00:30:13.942 [2024-12-05 14:01:13.568920] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:30:13.942 [2024-12-05 14:01:13.568942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32711 cdw0:1 sqhd:c990 p:0 m:0 dnr:0 00:30:13.942 [2024-12-05 14:01:13.571472] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:30:13.942 [2024-12-05 14:01:13.571491] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:30:13.942 [2024-12-05 14:01:13.571515] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:13.942 [2024-12-05 14:01:13.571528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32711 cdw0:1 sqhd:c990 p:0 m:0 dnr:0 00:30:13.942 [2024-12-05 14:01:13.571541] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:13.942 [2024-12-05 14:01:13.571553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32711 cdw0:1 sqhd:c990 p:0 m:0 dnr:0 00:30:13.942 [2024-12-05 14:01:13.571570] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:13.942 [2024-12-05 14:01:13.571582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32711 cdw0:1 sqhd:c990 p:0 m:0 dnr:0 00:30:13.942 [2024-12-05 14:01:13.571594] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:30:13.942 [2024-12-05 14:01:13.571606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32711 cdw0:1 sqhd:c990 p:0 m:0 dnr:0 00:30:13.942 [2024-12-05 14:01:13.573978] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:30:13.942 [2024-12-05 14:01:13.574010] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 
00:30:13.942 [2024-12-05 14:01:13.574047] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:13.942 [2024-12-05 14:01:13.574071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32711 cdw0:1 sqhd:c990 p:0 m:0 dnr:0 00:30:13.942 [2024-12-05 14:01:13.574093] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:13.943 [2024-12-05 14:01:13.574114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32711 cdw0:1 sqhd:c990 p:0 m:0 dnr:0 00:30:13.943 [2024-12-05 14:01:13.574136] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:13.943 [2024-12-05 14:01:13.574157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32711 cdw0:1 sqhd:c990 p:0 m:0 dnr:0 00:30:13.943 [2024-12-05 14:01:13.574180] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:30:13.943 [2024-12-05 14:01:13.574201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32711 cdw0:1 sqhd:c990 p:0 m:0 dnr:0 00:30:13.943 [2024-12-05 14:01:13.576998] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:30:13.943 [2024-12-05 14:01:13.577029] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:30:13.943 [2024-12-05 14:01:13.577069] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:13.943 [2024-12-05 14:01:13.577093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32711 cdw0:1 sqhd:c990 p:0 m:0 dnr:0 00:30:13.943 [2024-12-05 14:01:13.577116] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:13.943 [2024-12-05 14:01:13.577136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32711 cdw0:1 sqhd:c990 p:0 m:0 dnr:0 00:30:13.943 [2024-12-05 14:01:13.577159] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:13.943 [2024-12-05 14:01:13.577180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32711 cdw0:1 sqhd:c990 p:0 m:0 dnr:0 00:30:13.943 [2024-12-05 14:01:13.577202] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:30:13.943 [2024-12-05 14:01:13.577223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32711 cdw0:1 sqhd:c990 p:0 m:0 dnr:0 00:30:13.943 [2024-12-05 14:01:13.579766] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:30:13.943 [2024-12-05 14:01:13.579806] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 
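What follows is the bdev layer reacting to those dead admin queue pairs: bdev_nvme attempts a failover for each controller and logs "Unable to perform failover, already in progress" for cnode1 through cnode9, after which bdevperf's in-flight WRITEs on qid:1 begin completing as ABORTED - SQ DELETION, which is exactly the teardown-under-load path this test case exercises.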
00:30:13.943 [2024-12-05 14:01:13.582292] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress. 00:30:13.943 [2024-12-05 14:01:13.584887] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] Unable to perform failover, already in progress. 00:30:13.943 [2024-12-05 14:01:13.587230] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] Unable to perform failover, already in progress. 00:30:13.943 [2024-12-05 14:01:13.589162] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] Unable to perform failover, already in progress. 00:30:13.943 [2024-12-05 14:01:13.591599] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] Unable to perform failover, already in progress. 00:30:13.943 [2024-12-05 14:01:13.593831] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress. 00:30:13.943 [2024-12-05 14:01:13.596050] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress. 00:30:13.943 [2024-12-05 14:01:13.598334] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress. 00:30:13.943 [2024-12-05 14:01:13.600556] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] Unable to perform failover, already in progress. 00:30:13.943 [2024-12-05 14:01:13.600726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:24576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000196dfd80 len:0x10000 key:0x183b00 00:30:13.943 [2024-12-05 14:01:13.600749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d445000 sqhd:7210 p:0 m:0 dnr:0 00:30:13.943 [2024-12-05 14:01:13.600782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000196cfd00 len:0x10000 key:0x183b00 00:30:13.943 [2024-12-05 14:01:13.600797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d445000 sqhd:7210 p:0 m:0 dnr:0 00:30:13.943 [2024-12-05 14:01:13.600816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000196bfc80 len:0x10000 key:0x183b00 00:30:13.943 [2024-12-05 14:01:13.600829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d445000 sqhd:7210 p:0 m:0 dnr:0 00:30:13.943 [2024-12-05 14:01:13.600848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24960 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000196afc00 len:0x10000 key:0x183b00 00:30:13.943 [2024-12-05 14:01:13.600861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d445000 sqhd:7210 p:0 m:0 dnr:0 00:30:13.943 [2024-12-05 14:01:13.600880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:25088 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001969fb80 len:0x10000 key:0x183b00 00:30:13.943 [2024-12-05 14:01:13.600892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:5d445000 sqhd:7210 p:0 m:0 dnr:0 00:30:13.943 [2024-12-05 14:01:13.600911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:25216 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001968fb00 len:0x10000 key:0x183b00 00:30:13.943 [2024-12-05 14:01:13.600924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d445000 sqhd:7210 p:0 m:0 dnr:0 00:30:13.943 [2024-12-05 14:01:13.600943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25344 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001967fa80 len:0x10000 key:0x183b00 00:30:13.943 [2024-12-05 14:01:13.600959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d445000 sqhd:7210 p:0 m:0 dnr:0 00:30:13.943 [2024-12-05 14:01:13.600978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25472 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001966fa00 len:0x10000 key:0x183b00 00:30:13.943 [2024-12-05 14:01:13.600991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d445000 sqhd:7210 p:0 m:0 dnr:0 00:30:13.943 [2024-12-05 14:01:13.601009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25600 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001965f980 len:0x10000 key:0x183b00 00:30:13.943 [2024-12-05 14:01:13.601022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d445000 sqhd:7210 p:0 m:0 dnr:0 00:30:13.943 [2024-12-05 14:01:13.601040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:25728 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001964f900 len:0x10000 key:0x183b00 00:30:13.943 [2024-12-05 14:01:13.601053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d445000 sqhd:7210 p:0 m:0 dnr:0 00:30:13.943 [2024-12-05 14:01:13.601071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25856 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001963f880 len:0x10000 key:0x183b00 00:30:13.943 [2024-12-05 14:01:13.601084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d445000 sqhd:7210 p:0 m:0 dnr:0 00:30:13.943 [2024-12-05 14:01:13.601103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25984 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001962f800 len:0x10000 key:0x183b00 00:30:13.943 [2024-12-05 14:01:13.601115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d445000 sqhd:7210 p:0 m:0 dnr:0 00:30:13.943 [2024-12-05 14:01:13.601133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:26112 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001961f780 len:0x10000 key:0x183b00 00:30:13.943 [2024-12-05 14:01:13.601146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d445000 sqhd:7210 p:0 m:0 dnr:0 00:30:13.943 [2024-12-05 14:01:13.601164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:26240 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001960f700 len:0x10000 key:0x183b00 00:30:13.943 [2024-12-05 14:01:13.601176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d445000 sqhd:7210 p:0 m:0 
dnr:0 00:30:13.943 [2024-12-05 14:01:13.601195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:26368 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001948f500 len:0x10000 key:0x184500 00:30:13.943 [2024-12-05 14:01:13.601208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d445000 sqhd:7210 p:0 m:0 dnr:0 00:30:13.943 [2024-12-05 14:01:13.601227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:26496 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001947f480 len:0x10000 key:0x184500 00:30:13.943 [2024-12-05 14:01:13.601239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d445000 sqhd:7210 p:0 m:0 dnr:0 00:30:13.943 [2024-12-05 14:01:13.601258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:26624 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001946f400 len:0x10000 key:0x184500 00:30:13.943 [2024-12-05 14:01:13.601270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d445000 sqhd:7210 p:0 m:0 dnr:0 00:30:13.943 [2024-12-05 14:01:13.601289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:26752 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001945f380 len:0x10000 key:0x184500 00:30:13.943 [2024-12-05 14:01:13.601301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d445000 sqhd:7210 p:0 m:0 dnr:0 00:30:13.943 [2024-12-05 14:01:13.601322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26880 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001944f300 len:0x10000 key:0x184500 00:30:13.943 [2024-12-05 14:01:13.601335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d445000 sqhd:7210 p:0 m:0 dnr:0 00:30:13.943 [2024-12-05 14:01:13.601354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:27008 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001943f280 len:0x10000 key:0x184500 00:30:13.943 [2024-12-05 14:01:13.601366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d445000 sqhd:7210 p:0 m:0 dnr:0 00:30:13.943 [2024-12-05 14:01:13.601413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:27136 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001942f200 len:0x10000 key:0x184500 00:30:13.943 [2024-12-05 14:01:13.601427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d445000 sqhd:7210 p:0 m:0 dnr:0 00:30:13.943 [2024-12-05 14:01:13.601446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:27264 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001941f180 len:0x10000 key:0x184500 00:30:13.943 [2024-12-05 14:01:13.601459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d445000 sqhd:7210 p:0 m:0 dnr:0 00:30:13.944 [2024-12-05 14:01:13.601478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:27392 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001940f100 len:0x10000 key:0x184500 00:30:13.944 [2024-12-05 14:01:13.601490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d445000 sqhd:7210 p:0 m:0 dnr:0 00:30:13.944 [2024-12-05 
14:01:13.601509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:27520 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000199f0000 len:0x10000 key:0x183a00 00:30:13.944 [2024-12-05 14:01:13.601521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d445000 sqhd:7210 p:0 m:0 dnr:0 00:30:13.944 [2024-12-05 14:01:13.601540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:27648 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000199dff80 len:0x10000 key:0x183a00 00:30:13.944 [2024-12-05 14:01:13.601552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d445000 sqhd:7210 p:0 m:0 dnr:0 00:30:13.944 [2024-12-05 14:01:13.601571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27776 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000199cff00 len:0x10000 key:0x183a00 00:30:13.944 [2024-12-05 14:01:13.601584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d445000 sqhd:7210 p:0 m:0 dnr:0 00:30:13.944 [2024-12-05 14:01:13.601602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27904 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000199bfe80 len:0x10000 key:0x183a00 00:30:13.944 [2024-12-05 14:01:13.601614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d445000 sqhd:7210 p:0 m:0 dnr:0 00:30:13.944 [2024-12-05 14:01:13.601633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:28032 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000199afe00 len:0x10000 key:0x183a00 00:30:13.944 [2024-12-05 14:01:13.601645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d445000 sqhd:7210 p:0 m:0 dnr:0 00:30:13.944 [2024-12-05 14:01:13.601664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:28160 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001999fd80 len:0x10000 key:0x183a00 00:30:13.944 [2024-12-05 14:01:13.601677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d445000 sqhd:7210 p:0 m:0 dnr:0 00:30:13.944 [2024-12-05 14:01:13.601699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:28288 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001998fd00 len:0x10000 key:0x183a00 00:30:13.944 [2024-12-05 14:01:13.601711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d445000 sqhd:7210 p:0 m:0 dnr:0 00:30:13.944 [2024-12-05 14:01:13.601729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:28416 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001997fc80 len:0x10000 key:0x183a00 00:30:13.944 [2024-12-05 14:01:13.601742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d445000 sqhd:7210 p:0 m:0 dnr:0 00:30:13.944 [2024-12-05 14:01:13.601761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:28544 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001996fc00 len:0x10000 key:0x183a00 00:30:13.944 [2024-12-05 14:01:13.601773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d445000 sqhd:7210 p:0 m:0 dnr:0 00:30:13.944 [2024-12-05 14:01:13.601792] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:28672 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001995fb80 len:0x10000 key:0x183a00 00:30:13.944 [2024-12-05 14:01:13.601804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d445000 sqhd:7210 p:0 m:0 dnr:0 00:30:13.944 [2024-12-05 14:01:13.601823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28800 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001994fb00 len:0x10000 key:0x183a00 00:30:13.944 [2024-12-05 14:01:13.601835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d445000 sqhd:7210 p:0 m:0 dnr:0 00:30:13.944 [2024-12-05 14:01:13.601854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28928 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001993fa80 len:0x10000 key:0x183a00 00:30:13.944 [2024-12-05 14:01:13.601866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d445000 sqhd:7210 p:0 m:0 dnr:0 00:30:13.944 [2024-12-05 14:01:13.601885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:29056 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001992fa00 len:0x10000 key:0x183a00 00:30:13.944 [2024-12-05 14:01:13.601898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d445000 sqhd:7210 p:0 m:0 dnr:0 00:30:13.944 [2024-12-05 14:01:13.601916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:29184 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001991f980 len:0x10000 key:0x183a00 00:30:13.944 [2024-12-05 14:01:13.601928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d445000 sqhd:7210 p:0 m:0 dnr:0 00:30:13.944 [2024-12-05 14:01:13.601946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:29312 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001990f900 len:0x10000 key:0x183a00 00:30:13.944 [2024-12-05 14:01:13.601960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d445000 sqhd:7210 p:0 m:0 dnr:0 00:30:13.944 [2024-12-05 14:01:13.601979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:29440 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000198ff880 len:0x10000 key:0x183a00 00:30:13.944 [2024-12-05 14:01:13.601991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d445000 sqhd:7210 p:0 m:0 dnr:0 00:30:13.944 [2024-12-05 14:01:13.602010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:29568 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000198ef800 len:0x10000 key:0x183a00 00:30:13.944 [2024-12-05 14:01:13.602022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d445000 sqhd:7210 p:0 m:0 dnr:0 00:30:13.944 [2024-12-05 14:01:13.602040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:29696 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000198df780 len:0x10000 key:0x183a00 00:30:13.944 [2024-12-05 14:01:13.602055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d445000 sqhd:7210 p:0 m:0 dnr:0 00:30:13.944 [2024-12-05 14:01:13.602074] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29824 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000198cf700 len:0x10000 key:0x183a00 00:30:13.944 [2024-12-05 14:01:13.602086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d445000 sqhd:7210 p:0 m:0 dnr:0 00:30:13.944 [2024-12-05 14:01:13.602104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29952 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000198bf680 len:0x10000 key:0x183a00 00:30:13.944 [2024-12-05 14:01:13.602117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d445000 sqhd:7210 p:0 m:0 dnr:0 00:30:13.944 [2024-12-05 14:01:13.602136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:30080 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000198af600 len:0x10000 key:0x183a00 00:30:13.944 [2024-12-05 14:01:13.602148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d445000 sqhd:7210 p:0 m:0 dnr:0 00:30:13.944 [2024-12-05 14:01:13.602167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:30208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001989f580 len:0x10000 key:0x183a00 00:30:13.944 [2024-12-05 14:01:13.602179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d445000 sqhd:7210 p:0 m:0 dnr:0 00:30:13.944 [2024-12-05 14:01:13.602197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:30336 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001988f500 len:0x10000 key:0x183a00 00:30:13.944 [2024-12-05 14:01:13.602210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d445000 sqhd:7210 p:0 m:0 dnr:0 00:30:13.944 [2024-12-05 14:01:13.602229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:30464 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001987f480 len:0x10000 key:0x183a00 00:30:13.944 [2024-12-05 14:01:13.602245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d445000 sqhd:7210 p:0 m:0 dnr:0 00:30:13.944 [2024-12-05 14:01:13.602265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:30592 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001986f400 len:0x10000 key:0x183a00 00:30:13.944 [2024-12-05 14:01:13.602277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d445000 sqhd:7210 p:0 m:0 dnr:0 00:30:13.944 [2024-12-05 14:01:13.602296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:30720 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001985f380 len:0x10000 key:0x183a00 00:30:13.944 [2024-12-05 14:01:13.602308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d445000 sqhd:7210 p:0 m:0 dnr:0 00:30:13.944 [2024-12-05 14:01:13.602327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001984f300 len:0x10000 key:0x183a00 00:30:13.944 [2024-12-05 14:01:13.602340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d445000 sqhd:7210 p:0 m:0 dnr:0 00:30:13.944 [2024-12-05 14:01:13.602358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 
nsid:1 lba:30976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001983f280 len:0x10000 key:0x183a00 00:30:13.944 [2024-12-05 14:01:13.602371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d445000 sqhd:7210 p:0 m:0 dnr:0 00:30:13.944 [2024-12-05 14:01:13.602396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:31104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001982f200 len:0x10000 key:0x183a00 00:30:13.944 [2024-12-05 14:01:13.602411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d445000 sqhd:7210 p:0 m:0 dnr:0 00:30:13.944 [2024-12-05 14:01:13.602430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:31232 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001981f180 len:0x10000 key:0x183a00 00:30:13.944 [2024-12-05 14:01:13.602443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d445000 sqhd:7210 p:0 m:0 dnr:0 00:30:13.944 [2024-12-05 14:01:13.602462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:31360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001980f100 len:0x10000 key:0x183a00 00:30:13.944 [2024-12-05 14:01:13.602475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d445000 sqhd:7210 p:0 m:0 dnr:0 00:30:13.944 [2024-12-05 14:01:13.602494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:31488 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019bf0000 len:0x10000 key:0x184600 00:30:13.944 [2024-12-05 14:01:13.602507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d445000 sqhd:7210 p:0 m:0 dnr:0 00:30:13.945 [2024-12-05 14:01:13.602526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:31616 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019bdff80 len:0x10000 key:0x184600 00:30:13.945 [2024-12-05 14:01:13.602539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d445000 sqhd:7210 p:0 m:0 dnr:0 00:30:13.945 [2024-12-05 14:01:13.602558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:31744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019bcff00 len:0x10000 key:0x184600 00:30:13.945 [2024-12-05 14:01:13.602570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d445000 sqhd:7210 p:0 m:0 dnr:0 00:30:13.945 [2024-12-05 14:01:13.602589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019bbfe80 len:0x10000 key:0x184600 00:30:13.945 [2024-12-05 14:01:13.602601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d445000 sqhd:7210 p:0 m:0 dnr:0 00:30:13.945 [2024-12-05 14:01:13.602620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:32000 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019bafe00 len:0x10000 key:0x184600 00:30:13.945 [2024-12-05 14:01:13.602633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d445000 sqhd:7210 p:0 m:0 dnr:0 00:30:13.945 [2024-12-05 14:01:13.602652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:32128 len:128 SGL KEYED 
DATA BLOCK ADDRESS 0x200019b9fd80 len:0x10000 key:0x184600 00:30:13.945 [2024-12-05 14:01:13.602664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d445000 sqhd:7210 p:0 m:0 dnr:0 00:30:13.945 [2024-12-05 14:01:13.602682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:32256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019b8fd00 len:0x10000 key:0x184600 00:30:13.945 [2024-12-05 14:01:13.602695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d445000 sqhd:7210 p:0 m:0 dnr:0 00:30:13.945 [2024-12-05 14:01:13.602714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:32384 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019b7fc80 len:0x10000 key:0x184600 00:30:13.945 [2024-12-05 14:01:13.602726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d445000 sqhd:7210 p:0 m:0 dnr:0 00:30:13.945 [2024-12-05 14:01:13.602745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:32512 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019b6fc00 len:0x10000 key:0x184600 00:30:13.945 [2024-12-05 14:01:13.602757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d445000 sqhd:7210 p:0 m:0 dnr:0 00:30:13.945 [2024-12-05 14:01:13.602778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:32640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000196efe00 len:0x10000 key:0x183b00 00:30:13.945 [2024-12-05 14:01:13.602791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d445000 sqhd:7210 p:0 m:0 dnr:0 00:30:13.945 [2024-12-05 14:01:13.625666] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress. 00:30:13.945 [2024-12-05 14:01:13.625750] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress. 00:30:13.945 [2024-12-05 14:01:13.625765] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] Unable to perform failover, already in progress. 00:30:13.945 [2024-12-05 14:01:13.625778] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress. 00:30:13.945 [2024-12-05 14:01:13.625789] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] Unable to perform failover, already in progress. 00:30:13.945 [2024-12-05 14:01:13.625800] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] Unable to perform failover, already in progress. 00:30:13.945 [2024-12-05 14:01:13.625812] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] Unable to perform failover, already in progress. 00:30:13.945 [2024-12-05 14:01:13.625823] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress. 00:30:13.945 [2024-12-05 14:01:13.625835] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] Unable to perform failover, already in progress. 
00:30:13.945 [2024-12-05 14:01:13.625847] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress. 00:30:13.945 [2024-12-05 14:01:13.625871] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress. 00:30:13.945 [2024-12-05 14:01:13.638475] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:30:13.945 [2024-12-05 14:01:13.638508] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:30:13.945 [2024-12-05 14:01:13.638521] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:30:13.945 [2024-12-05 14:01:13.638569] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress. 00:30:13.945 [2024-12-05 14:01:13.638585] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] Unable to perform failover, already in progress. 00:30:13.945 [2024-12-05 14:01:13.638597] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress. 00:30:13.945 [2024-12-05 14:01:13.638609] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] Unable to perform failover, already in progress. 00:30:13.945 [2024-12-05 14:01:13.638620] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] Unable to perform failover, already in progress. 00:30:13.945 [2024-12-05 14:01:13.638637] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress. 00:30:13.945 [2024-12-05 14:01:13.638648] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress. 
00:30:13.945 [2024-12-05 14:01:13.638997] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller
00:30:13.945 [2024-12-05 14:01:13.639016] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller
00:30:13.945 [2024-12-05 14:01:13.639034] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
00:30:13.945 [2024-12-05 14:01:13.641359] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller
00:30:13.945 task offset: 37888 on job bdev=Nvme1n1 fails
00:30:13.945
00:30:13.945 Latency(us)
00:30:13.945 [2024-12-05T13:01:13.798Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:13.945 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:13.945 Job: Nvme1n1 ended in about 1.84 seconds with error
00:30:13.945 Verification LBA range: start 0x0 length 0x400
00:30:13.945 Nvme1n1 : 1.84 148.20 9.26 34.87 0.00 346897.43 6043.88 1043915.66
00:30:13.945 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:13.945 Job: Nvme2n1 ended in about 1.84 seconds with error
00:30:13.945 Verification LBA range: start 0x0 length 0x400
00:30:13.945 Nvme2n1 : 1.84 148.07 9.25 34.84 0.00 344311.64 5315.70 1043915.66
00:30:13.945 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:13.945 Job: Nvme3n1 ended in about 1.84 seconds with error
00:30:13.945 Verification LBA range: start 0x0 length 0x400
00:30:13.945 Nvme3n1 : 1.84 155.57 9.72 34.81 0.00 328046.24 10291.58 1043915.66
00:30:13.945 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:13.945 Job: Nvme4n1 ended in about 1.84 seconds with error
00:30:13.945 Verification LBA range: start 0x0 length 0x400
00:30:13.945 Nvme4n1 : 1.84 154.89 9.68 34.78 0.00 326762.93 17185.00 1043915.66
00:30:13.945 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:13.945 Job: Nvme5n1 ended in about 1.84 seconds with error
00:30:13.945 Verification LBA range: start 0x0 length 0x400
00:30:13.945 Nvme5n1 : 1.84 143.36 8.96 34.75 0.00 344914.13 24855.13 1043915.66
00:30:13.945 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:13.945 Job: Nvme6n1 ended in about 1.84 seconds with error
00:30:13.945 Verification LBA range: start 0x0 length 0x400
00:30:13.945 Nvme6n1 : 1.84 156.26 9.77 34.73 0.00 318912.08 26214.40 1043915.66
00:30:13.945 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:13.945 Job: Nvme7n1 ended in about 1.84 seconds with error
00:30:13.945 Verification LBA range: start 0x0 length 0x400
00:30:13.945 Nvme7n1 : 1.84 154.51 9.66 34.70 0.00 319176.66 34952.53 1043915.66
00:30:13.945 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:13.945 Job: Nvme8n1 ended in about 1.85 seconds with error
00:30:13.945 Verification LBA range: start 0x0 length 0x400
00:30:13.945 Nvme8n1 : 1.85 153.86 9.62 34.67 0.00 317745.26 41748.86 1037701.88
00:30:13.945 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:13.945 Job: Nvme9n1 ended in about 1.85 seconds with error
00:30:13.945 Verification LBA range: start 0x0 length 0x400
00:30:13.945 Nvme9n1 : 1.85 138.62 8.66 34.66 0.00 343010.27 48545.19 1037701.88
00:30:13.945 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:13.945 Job: Nvme10n1 ended in about 1.81 seconds with error
00:30:13.945 Verification LBA range: start 0x0 length 0x400
00:30:13.945 Nvme10n1 : 1.81 105.96 6.62 35.32 0.00 418328.27 55535.69 1068770.80
00:30:13.945 [2024-12-05T13:01:13.798Z] ===================================================================================================================
00:30:13.945 [2024-12-05T13:01:13.798Z] Total : 1459.30 91.21 348.13 0.00 338542.39 5315.70 1068770.80
00:30:13.945 [2024-12-05 14:01:13.662280] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:30:13.945 [2024-12-05 14:01:13.662304] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller
00:30:13.945 [2024-12-05 14:01:13.662317] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller
00:30:13.945 [2024-12-05 14:01:13.662330] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller
00:30:13.945 [2024-12-05 14:01:13.671695] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:30:13.945 [2024-12-05 14:01:13.671748] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:30:13.945 [2024-12-05 14:01:13.671770] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000170ed040
00:30:13.946 [2024-12-05 14:01:13.671942] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:30:13.946 [2024-12-05 14:01:13.671970] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:30:13.946 [2024-12-05 14:01:13.671988] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000170e5300
00:30:13.946 [2024-12-05 14:01:13.672094] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:30:13.946 [2024-12-05 14:01:13.672119] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:30:13.946 [2024-12-05 14:01:13.672136] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000170d9c80
00:30:13.946 [2024-12-05 14:01:13.677935] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:30:13.946 [2024-12-05 14:01:13.677981] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:30:13.946 [2024-12-05 14:01:13.678001] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000170d2900
00:30:13.946 [2024-12-05 14:01:13.678155] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:30:13.946 [2024-12-05 14:01:13.678170] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:30:13.946 [2024-12-05 14:01:13.678179] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000170c6340
00:30:13.946 [2024-12-05 14:01:13.678261] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:30:13.946 [2024-12-05 14:01:13.678275] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:30:13.946 [2024-12-05 14:01:13.678284] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000170bf1c0 00:30:13.946 [2024-12-05 14:01:13.679189] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:30:13.946 [2024-12-05 14:01:13.679221] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:30:13.946 [2024-12-05 14:01:13.679238] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000170a8500 00:30:13.946 [2024-12-05 14:01:13.679351] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:30:13.946 [2024-12-05 14:01:13.679387] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:30:13.946 [2024-12-05 14:01:13.679404] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000170c5040 00:30:13.946 [2024-12-05 14:01:13.679518] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:30:13.946 [2024-12-05 14:01:13.679543] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:30:13.946 [2024-12-05 14:01:13.679559] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20001709b1c0 00:30:13.946 [2024-12-05 14:01:13.679658] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:30:13.946 [2024-12-05 14:01:13.679687] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:30:13.946 [2024-12-05 14:01:13.679704] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20001708e080 00:30:14.203 14:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 1844702 00:30:14.203 14:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0 00:30:14.203 14:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 1844702 00:30:14.203 14:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait 00:30:14.203 14:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:14.203 14:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait 00:30:14.203 14:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:14.203 14:01:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 1844702 00:30:15.141 [2024-12-05 14:01:14.676336] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:30:15.141 [2024-12-05 14:01:14.676396] nvme_ctrlr.c:1110:nvme_ctrlr_fail: 
*ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:30:15.141 [2024-12-05 14:01:14.678070] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:30:15.141 [2024-12-05 14:01:14.678105] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:30:15.141 [2024-12-05 14:01:14.679557] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:30:15.141 [2024-12-05 14:01:14.679589] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:30:15.141 [2024-12-05 14:01:14.679685] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:30:15.141 [2024-12-05 14:01:14.679708] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:30:15.141 [2024-12-05 14:01:14.679730] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] already in failed state 00:30:15.141 [2024-12-05 14:01:14.679756] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:30:15.141 [2024-12-05 14:01:14.679787] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:30:15.141 [2024-12-05 14:01:14.679807] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:30:15.141 [2024-12-05 14:01:14.679826] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] already in failed state 00:30:15.141 [2024-12-05 14:01:14.679847] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:30:15.141 [2024-12-05 14:01:14.679873] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:30:15.141 [2024-12-05 14:01:14.679893] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:30:15.141 [2024-12-05 14:01:14.679911] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] already in failed state 00:30:15.141 [2024-12-05 14:01:14.679931] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:30:15.141 [2024-12-05 14:01:14.681949] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:30:15.141 [2024-12-05 14:01:14.681985] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:30:15.141 [2024-12-05 14:01:14.683298] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:30:15.141 [2024-12-05 14:01:14.683330] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 
00:30:15.141 [2024-12-05 14:01:14.684737] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:30:15.141 [2024-12-05 14:01:14.684770] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:30:15.141 [2024-12-05 14:01:14.686162] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:30:15.141 [2024-12-05 14:01:14.686194] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:30:15.141 [2024-12-05 14:01:14.687396] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:30:15.141 [2024-12-05 14:01:14.687429] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:30:15.141 [2024-12-05 14:01:14.688695] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:30:15.141 [2024-12-05 14:01:14.688728] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:30:15.141 [2024-12-05 14:01:14.690186] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:30:15.141 [2024-12-05 14:01:14.690218] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:30:15.141 [2024-12-05 14:01:14.690237] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:30:15.141 [2024-12-05 14:01:14.690257] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:30:15.141 [2024-12-05 14:01:14.690276] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] already in failed state 00:30:15.141 [2024-12-05 14:01:14.690298] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:30:15.141 [2024-12-05 14:01:14.690326] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:30:15.141 [2024-12-05 14:01:14.690345] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:30:15.141 [2024-12-05 14:01:14.690365] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] already in failed state 00:30:15.141 [2024-12-05 14:01:14.690395] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 
00:30:15.141 [2024-12-05 14:01:14.690423] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:30:15.141 [2024-12-05 14:01:14.690442] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:30:15.141 [2024-12-05 14:01:14.690461] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] already in failed state 00:30:15.141 [2024-12-05 14:01:14.690482] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 00:30:15.141 [2024-12-05 14:01:14.690727] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:30:15.141 [2024-12-05 14:01:14.690743] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:30:15.141 [2024-12-05 14:01:14.690759] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] already in failed state 00:30:15.141 [2024-12-05 14:01:14.690771] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 00:30:15.142 [2024-12-05 14:01:14.690786] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:30:15.142 [2024-12-05 14:01:14.690797] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:30:15.142 [2024-12-05 14:01:14.690807] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] already in failed state 00:30:15.142 [2024-12-05 14:01:14.690819] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:30:15.142 [2024-12-05 14:01:14.690833] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:30:15.142 [2024-12-05 14:01:14.690844] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:30:15.142 [2024-12-05 14:01:14.690854] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] already in failed state 00:30:15.142 [2024-12-05 14:01:14.690865] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:30:15.142 [2024-12-05 14:01:14.690880] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:30:15.142 [2024-12-05 14:01:14.690891] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:30:15.142 [2024-12-05 14:01:14.690902] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] already in failed state 00:30:15.142 [2024-12-05 14:01:14.690913] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 
00:30:15.142 14:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255 00:30:15.142 14:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:15.142 14:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127 00:30:15.142 14:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in 00:30:15.142 14:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1 00:30:15.142 14:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:15.142 14:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:30:15.142 14:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:30:15.142 14:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:30:15.142 14:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:15.142 14:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:30:15.142 14:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:15.142 14:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:30:15.142 14:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:30:15.142 14:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:30:15.142 14:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:30:15.142 14:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:15.142 14:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:30:15.142 rmmod nvme_rdma 00:30:15.142 rmmod nvme_fabrics 00:30:15.142 14:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:15.142 14:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:30:15.142 14:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:30:15.142 14:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 1844578 ']' 00:30:15.142 14:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 1844578 00:30:15.142 14:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 1844578 ']' 00:30:15.142 14:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 1844578 00:30:15.142 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1844578) - No such process 00:30:15.142 
14:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 1844578 is not found' 00:30:15.142 Process with pid 1844578 is not found 00:30:15.142 14:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:15.142 14:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:30:15.142 00:30:15.142 real 0m5.272s 00:30:15.142 user 0m15.383s 00:30:15.142 sys 0m1.078s 00:30:15.142 14:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:15.142 14:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:15.142 ************************************ 00:30:15.142 END TEST nvmf_shutdown_tc3 00:30:15.142 ************************************ 00:30:15.142 14:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ mlx5 == \e\8\1\0 ]] 00:30:15.142 14:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:30:15.142 14:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:15.142 14:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:15.142 14:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:15.142 ************************************ 00:30:15.142 START TEST nvmf_shutdown_tc4 00:30:15.142 ************************************ 00:30:15.142 14:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4 00:30:15.142 14:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:30:15.142 14:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:30:15.142 14:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:30:15.142 14:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:15.142 14:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:15.142 14:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:15.142 14:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:15.142 14:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:15.142 14:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:15.142 14:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:15.142 14:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:15.142 14:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:15.142 14:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # 
xtrace_disable 00:30:15.142 14:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:30:15.142 14:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:15.142 14:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:30:15.142 14:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:15.142 14:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:15.142 14:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:15.142 14:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:15.142 14:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:15.142 14:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:30:15.142 14:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:15.142 14:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:30:15.142 14:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:30:15.142 14:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:30:15.142 14:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:30:15.142 14:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:30:15.142 14:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:30:15.142 14:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:15.142 14:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:15.142 14:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:15.142 14:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:15.142 14:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:15.142 14:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:15.142 14:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:15.142 14:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:15.142 14:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:15.143 14:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
00:30:15.143 14:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:15.143 14:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:15.143 14:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:15.143 14:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:30:15.143 14:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:30:15.143 14:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:30:15.143 14:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:30:15.143 14:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:30:15.143 14:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:15.143 14:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:15.143 14:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:30:15.143 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:30:15.143 14:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:30:15.143 14:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:30:15.403 14:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:30:15.403 14:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:30:15.403 14:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:30:15.403 14:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:30:15.403 14:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:15.403 14:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:30:15.403 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:30:15.403 14:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:30:15.403 14:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:30:15.403 14:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:30:15.403 14:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:30:15.403 14:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:30:15.403 14:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:30:15.403 
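The arrays built above map NIC families to PCI vendor:device IDs (0x8086 Intel e810/x722 parts, 0x15b3 Mellanox parts); with mlx5 selected, pci_devs collapses to the Mellanox entries, and the per-device loop that follows resolves each matched function to its net devices through sysfs. A minimal standalone sketch of that resolution step, assuming the standard Linux sysfs layout and the two functions found in this run:

    for pci in 0000:18:00.0 0000:18:00.1; do
        # every netdev backed by this PCI function appears as an entry under .../net/
        for netdev in "/sys/bus/pci/devices/$pci/net/"*; do
            [ -e "$netdev" ] && echo "Found net device under $pci: ${netdev##*/}"
        done
    done
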
14:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:15.403 14:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:30:15.403 14:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:15.403 14:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:15.403 14:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:30:15.403 14:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:15.403 14:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:15.403 14:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:30:15.403 Found net devices under 0000:18:00.0: mlx_0_0 00:30:15.403 14:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:15.404 14:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:15.404 14:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:15.404 14:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:30:15.404 14:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:15.404 14:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:15.404 14:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:30:15.404 Found net devices under 0000:18:00.1: mlx_0_1 00:30:15.404 14:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:15.404 14:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:15.404 14:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:30:15.404 14:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:15.404 14:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:30:15.404 14:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:30:15.404 14:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@448 -- # rdma_device_init 00:30:15.404 14:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:30:15.404 14:01:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@62 -- # uname 00:30:15.404 14:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:30:15.404 14:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 
-- nvmf/common.sh@66 -- # modprobe ib_cm 00:30:15.404 14:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@67 -- # modprobe ib_core 00:30:15.404 14:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@68 -- # modprobe ib_umad 00:30:15.404 14:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:30:15.404 14:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@70 -- # modprobe iw_cm 00:30:15.404 14:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:30:15.404 14:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:30:15.404 14:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@530 -- # allocate_nic_ips 00:30:15.404 14:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:30:15.404 14:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@77 -- # get_rdma_if_list 00:30:15.404 14:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:30:15.404 14:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:30:15.404 14:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:30:15.404 14:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:30:15.404 14:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:30:15.404 14:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:30:15.404 14:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:15.404 14:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:30:15.404 14:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:30:15.404 14:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@109 -- # continue 2 00:30:15.404 14:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:30:15.404 14:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:15.404 14:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:30:15.404 14:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:15.404 14:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:30:15.404 14:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:30:15.404 14:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@109 -- # continue 2 00:30:15.404 14:01:15 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:30:15.404 14:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:30:15.404 14:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:30:15.404 14:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:30:15.404 14:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:30:15.404 14:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:30:15.404 14:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:30:15.404 14:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:30:15.404 14:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:30:15.404 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:30:15.404 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:30:15.404 altname enp24s0f0np0 00:30:15.404 altname ens785f0np0 00:30:15.404 inet 192.168.100.8/24 scope global mlx_0_0 00:30:15.404 valid_lft forever preferred_lft forever 00:30:15.404 14:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:30:15.404 14:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:30:15.404 14:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:30:15.404 14:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:30:15.404 14:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:30:15.404 14:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:30:15.404 14:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:30:15.404 14:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:30:15.404 14:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:30:15.404 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:30:15.404 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:30:15.404 altname enp24s0f1np1 00:30:15.404 altname ens785f1np1 00:30:15.404 inet 192.168.100.9/24 scope global mlx_0_1 00:30:15.404 valid_lft forever preferred_lft forever 00:30:15.404 14:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:30:15.404 14:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:15.404 14:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:30:15.404 14:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:30:15.404 14:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@484 -- # get_available_rdma_ips 00:30:15.404 14:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@90 -- # get_rdma_if_list 00:30:15.404 14:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:30:15.404 14:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:30:15.404 14:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:30:15.404 14:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:30:15.404 14:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:30:15.404 14:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:30:15.404 14:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:15.404 14:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:30:15.404 14:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:30:15.404 14:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@109 -- # continue 2 00:30:15.404 14:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:30:15.404 14:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:15.404 14:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:30:15.404 14:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:15.404 14:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:30:15.404 14:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:30:15.404 14:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@109 -- # continue 2 00:30:15.404 14:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:30:15.404 14:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:30:15.404 14:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:30:15.404 14:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:30:15.404 14:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:30:15.404 14:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:30:15.405 14:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:30:15.405 14:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@91 -- 
# get_ip_address mlx_0_1 00:30:15.405 14:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:30:15.405 14:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:30:15.405 14:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:30:15.405 14:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:30:15.405 14:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:30:15.405 192.168.100.9' 00:30:15.405 14:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:30:15.405 192.168.100.9' 00:30:15.405 14:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@485 -- # head -n 1 00:30:15.405 14:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:30:15.405 14:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:30:15.405 192.168.100.9' 00:30:15.405 14:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@486 -- # tail -n +2 00:30:15.405 14:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@486 -- # head -n 1 00:30:15.405 14:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:30:15.405 14:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:30:15.405 14:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:30:15.405 14:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:30:15.405 14:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:30:15.405 14:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:30:15.405 14:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:30:15.405 14:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:15.405 14:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:15.405 14:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:30:15.405 14:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=1845547 00:30:15.405 14:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 1845547 00:30:15.405 14:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:30:15.405 14:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 1845547 ']' 00:30:15.405 14:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:15.405 14:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:15.405 14:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:15.405 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:15.405 14:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:15.405 14:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:30:15.405 [2024-12-05 14:01:15.247254] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 00:30:15.405 [2024-12-05 14:01:15.247297] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:15.664 [2024-12-05 14:01:15.321948] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:15.664 [2024-12-05 14:01:15.343611] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:15.664 [2024-12-05 14:01:15.343649] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:15.664 [2024-12-05 14:01:15.343656] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:15.664 [2024-12-05 14:01:15.343661] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:15.664 [2024-12-05 14:01:15.343669] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
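The nvmfappstart/waitforlisten steps above boil down to launching nvmf_tgt in the background and polling its RPC socket until it answers. A minimal sketch of that pattern, assuming the default /var/tmp/spdk.sock socket path (the real waitforlisten helper in autotest_common.sh adds retries and a timeout):

    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
    nvmfpid=$!
    # 0x1E pins reactors onto cores 1-4, matching the four reactor notices below
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done
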
00:30:15.664 [2024-12-05 14:01:15.345074] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:15.664 [2024-12-05 14:01:15.345183] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:15.664 [2024-12-05 14:01:15.345265] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:15.664 [2024-12-05 14:01:15.345265] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:30:15.664 14:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:15.664 14:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0 00:30:15.664 14:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:15.664 14:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:15.664 14:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:30:15.664 14:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:15.664 14:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:30:15.664 14:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:15.664 14:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:30:15.664 [2024-12-05 14:01:15.503641] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x24aa230/0x24ae720) succeed. 00:30:15.664 [2024-12-05 14:01:15.511946] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x24ab8c0/0x24efdc0) succeed. 
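With the RDMA transport created and both IB devices up, the loop that follows assembles one subsystem definition per iteration into rpcs.txt and replays them in a single rpc_cmd batch. A hedged sketch of the equivalent direct per-command RPC sequence (the Malloc bdev sizes here are assumed, and the real script batches through rpcs.txt rather than invoking rpc.py ten times):

    # each of the ten subsystems needs a backing bdev, a namespace, and an RDMA listener
    for i in $(seq 1 10); do
        ./scripts/rpc.py bdev_malloc_create -b Malloc$i 128 512
        ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a
        ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
        ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i \
            -t rdma -a 192.168.100.8 -s 4420
    done
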
00:30:15.923 14:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:15.923 14:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:30:15.923 14:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:30:15.923 14:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:15.923 14:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:30:15.923 14:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:15.923 14:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:15.923 14:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:30:15.923 14:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:15.923 14:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:30:15.923 14:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:15.923 14:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:30:15.923 14:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:15.923 14:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:30:15.923 14:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:15.923 14:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:30:15.923 14:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:15.923 14:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:30:15.923 14:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:15.923 14:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:30:15.923 14:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:15.923 14:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:30:15.923 14:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:15.923 14:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:30:15.923 14:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:15.923 14:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:30:15.923 14:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
target/shutdown.sh@36 -- # rpc_cmd 00:30:15.923 14:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:15.923 14:01:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:30:15.923 Malloc1 00:30:15.923 [2024-12-05 14:01:15.727488] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:30:15.923 Malloc2 00:30:16.181 Malloc3 00:30:16.181 Malloc4 00:30:16.181 Malloc5 00:30:16.181 Malloc6 00:30:16.181 Malloc7 00:30:16.181 Malloc8 00:30:16.440 Malloc9 00:30:16.440 Malloc10 00:30:16.440 14:01:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:16.440 14:01:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:30:16.440 14:01:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:16.440 14:01:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:30:16.440 14:01:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=1845839 00:30:16.440 14:01:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:30:16.440 14:01:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:rdma adrfam:IPV4 traddr:192.168.100.8 trsvcid:4420' -P 4 00:30:16.440 [2024-12-05 14:01:16.255537] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
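What follows is the point of tc4: with spdk_nvme_perf holding 128 queued writes per qpair, the target process is killed out from under it, and the initiator is expected to surface qpair completion errors and failed keep-alives rather than hang. Reduced to a sketch (the perf arguments are the ones from this run; $nvmfpid is the target pid captured at launch, and killprocess is the autotest helper visible below):

    # start perf against the RDMA listener, then take the target away mid-I/O
    ./build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 \
        -r 'trtype:rdma adrfam:IPV4 traddr:192.168.100.8 trsvcid:4420' -P 4 &
    perfpid=$!
    sleep 5                          # let I/O ramp up
    kill $nvmfpid && wait $nvmfpid   # target exits; in-flight writes now fail
    wait $perfpid || true            # perf logs 'Write completed with error' entries
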
00:30:21.870 14:01:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:21.870 14:01:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 1845547 00:30:21.870 14:01:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 1845547 ']' 00:30:21.870 14:01:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 1845547 00:30:21.870 14:01:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname 00:30:21.870 14:01:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:21.870 14:01:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1845547 00:30:21.870 14:01:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:21.870 14:01:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:21.870 14:01:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1845547' 00:30:21.870 killing process with pid 1845547 00:30:21.870 14:01:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 1845547 00:30:21.870 14:01:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 1845547 00:30:21.870 NVMe io qpair process completion error 00:30:21.870 NVMe io qpair process completion error 00:30:21.870 NVMe io qpair process completion error 00:30:21.870 NVMe io qpair process completion error 00:30:21.870 NVMe io qpair process completion error 00:30:21.870 NVMe io qpair process completion error 00:30:21.870 starting I/O failed: -6 00:30:21.870 starting I/O failed: -6 00:30:21.870 starting I/O failed: -6 00:30:21.870 starting I/O failed: -6 00:30:21.870 starting I/O failed: -6 00:30:21.870 starting I/O failed: -6 00:30:21.870 starting I/O failed: -6 00:30:21.870 NVMe io qpair process completion error 00:30:21.870 NVMe io qpair process completion error 00:30:21.870 NVMe io qpair process completion error 00:30:21.870 NVMe io qpair process completion error 00:30:22.130 14:01:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1 00:30:22.701 Write completed with error (sct=0, sc=8) 00:30:22.701 starting I/O failed: -6 00:30:22.701 Write completed with error (sct=0, sc=8) 00:30:22.701 starting I/O failed: -6 00:30:22.701 Write completed with error (sct=0, sc=8) 00:30:22.701 starting I/O failed: -6 00:30:22.701 Write completed with error (sct=0, sc=8) 00:30:22.701 starting I/O failed: -6 00:30:22.701 Write completed with error (sct=0, sc=8) 00:30:22.701 starting I/O failed: -6 00:30:22.701 Write completed with error (sct=0, sc=8) 00:30:22.701 starting I/O failed: -6 00:30:22.701 Write completed with error (sct=0, sc=8) 00:30:22.701 starting I/O failed: -6 00:30:22.701 Write completed with error (sct=0, sc=8) 00:30:22.701 starting I/O failed: -6 00:30:22.701 Write completed with error (sct=0, sc=8) 00:30:22.701 starting I/O failed: -6 00:30:22.701 Write 
completed with error (sct=0, sc=8) 00:30:22.701 starting I/O failed: -6 00:30:22.701 Write completed with error (sct=0, sc=8) 00:30:22.701 starting I/O failed: -6 00:30:22.701 Write completed with error (sct=0, sc=8) 00:30:22.701 starting I/O failed: -6 00:30:22.701 Write completed with error (sct=0, sc=8) 00:30:22.701 starting I/O failed: -6 00:30:22.701 Write completed with error (sct=0, sc=8) 00:30:22.701 starting I/O failed: -6 00:30:22.701 Write completed with error (sct=0, sc=8) 00:30:22.701 starting I/O failed: -6 00:30:22.701 Write completed with error (sct=0, sc=8) 00:30:22.701 starting I/O failed: -6 00:30:22.701 Write completed with error (sct=0, sc=8) 00:30:22.701 starting I/O failed: -6 00:30:22.701 Write completed with error (sct=0, sc=8) 00:30:22.701 starting I/O failed: -6 00:30:22.701 Write completed with error (sct=0, sc=8) 00:30:22.701 starting I/O failed: -6 00:30:22.701 Write completed with error (sct=0, sc=8) 00:30:22.701 starting I/O failed: -6 00:30:22.701 Write completed with error (sct=0, sc=8) 00:30:22.701 starting I/O failed: -6 00:30:22.701 Write completed with error (sct=0, sc=8) 00:30:22.701 starting I/O failed: -6 00:30:22.701 Write completed with error (sct=0, sc=8) 00:30:22.701 starting I/O failed: -6 00:30:22.701 Write completed with error (sct=0, sc=8) 00:30:22.701 starting I/O failed: -6 00:30:22.701 Write completed with error (sct=0, sc=8) 00:30:22.701 starting I/O failed: -6 00:30:22.701 Write completed with error (sct=0, sc=8) 00:30:22.701 starting I/O failed: -6 00:30:22.701 Write completed with error (sct=0, sc=8) 00:30:22.701 starting I/O failed: -6 00:30:22.701 Write completed with error (sct=0, sc=8) 00:30:22.701 starting I/O failed: -6 00:30:22.701 Write completed with error (sct=0, sc=8) 00:30:22.701 starting I/O failed: -6 00:30:22.701 Write completed with error (sct=0, sc=8) 00:30:22.701 starting I/O failed: -6 00:30:22.701 Write completed with error (sct=0, sc=8) 00:30:22.701 starting I/O failed: -6 00:30:22.701 Write completed with error (sct=0, sc=8) 00:30:22.701 starting I/O failed: -6 00:30:22.701 Write completed with error (sct=0, sc=8) 00:30:22.701 starting I/O failed: -6 00:30:22.701 Write completed with error (sct=0, sc=8) 00:30:22.701 starting I/O failed: -6 00:30:22.701 Write completed with error (sct=0, sc=8) 00:30:22.701 starting I/O failed: -6 00:30:22.701 Write completed with error (sct=0, sc=8) 00:30:22.701 starting I/O failed: -6 00:30:22.701 Write completed with error (sct=0, sc=8) 00:30:22.701 starting I/O failed: -6 00:30:22.701 Write completed with error (sct=0, sc=8) 00:30:22.701 starting I/O failed: -6 00:30:22.701 Write completed with error (sct=0, sc=8) 00:30:22.701 starting I/O failed: -6 00:30:22.701 Write completed with error (sct=0, sc=8) 00:30:22.701 starting I/O failed: -6 00:30:22.701 Write completed with error (sct=0, sc=8) 00:30:22.701 starting I/O failed: -6 00:30:22.701 Write completed with error (sct=0, sc=8) 00:30:22.701 starting I/O failed: -6 00:30:22.701 Write completed with error (sct=0, sc=8) 00:30:22.701 starting I/O failed: -6 00:30:22.701 Write completed with error (sct=0, sc=8) 00:30:22.701 starting I/O failed: -6 00:30:22.701 Write completed with error (sct=0, sc=8) 00:30:22.701 starting I/O failed: -6 00:30:22.701 Write completed with error (sct=0, sc=8) 00:30:22.701 starting I/O failed: -6 00:30:22.701 Write completed with error (sct=0, sc=8) 00:30:22.701 starting I/O failed: -6 00:30:22.701 Write completed with error (sct=0, sc=8) 00:30:22.701 starting I/O failed: -6 00:30:22.701 Write 
completed with error (sct=0, sc=8) 00:30:22.701 starting I/O failed: -6 00:30:22.701 Write completed with error (sct=0, sc=8) 00:30:22.701 starting I/O failed: -6 00:30:22.701 Write completed with error (sct=0, sc=8) 00:30:22.701 starting I/O failed: -6 00:30:22.701 Write completed with error (sct=0, sc=8) 00:30:22.701 starting I/O failed: -6 00:30:22.701 Write completed with error (sct=0, sc=8) 00:30:22.701 starting I/O failed: -6 00:30:22.701 Write completed with error (sct=0, sc=8) 00:30:22.701 starting I/O failed: -6 00:30:22.701 Write completed with error (sct=0, sc=8) 00:30:22.701 starting I/O failed: -6 00:30:22.701 Write completed with error (sct=0, sc=8) 00:30:22.701 starting I/O failed: -6 00:30:22.701 Write completed with error (sct=0, sc=8) 00:30:22.701 starting I/O failed: -6 00:30:22.701 Write completed with error (sct=0, sc=8) 00:30:22.701 starting I/O failed: -6 00:30:22.701 Write completed with error (sct=0, sc=8) 00:30:22.701 starting I/O failed: -6 00:30:22.701 Write completed with error (sct=0, sc=8) 00:30:22.701 starting I/O failed: -6 00:30:22.701 Write completed with error (sct=0, sc=8) 00:30:22.701 starting I/O failed: -6 00:30:22.701 Write completed with error (sct=0, sc=8) 00:30:22.701 Write completed with error (sct=0, sc=8) 00:30:22.701 Write completed with error (sct=0, sc=8) 00:30:22.701 Write completed with error (sct=0, sc=8) 00:30:22.701 Write completed with error (sct=0, sc=8) 00:30:22.701 Write completed with error (sct=0, sc=8) 00:30:22.701 Write completed with error (sct=0, sc=8) 00:30:22.701 Write completed with error (sct=0, sc=8) 00:30:22.701 [2024-12-05 14:01:22.326507] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Submitting Keep Alive failed 00:30:22.701 Write completed with error (sct=0, sc=8) 00:30:22.701 Write completed with error (sct=0, sc=8) 00:30:22.701 Write completed with error (sct=0, sc=8) 00:30:22.701 Write completed with error (sct=0, sc=8) 00:30:22.701 Write completed with error (sct=0, sc=8) 00:30:22.701 Write completed with error (sct=0, sc=8) 00:30:22.701 Write completed with error (sct=0, sc=8) 00:30:22.701 Write completed with error (sct=0, sc=8) 00:30:22.701 Write completed with error (sct=0, sc=8) 00:30:22.701 Write completed with error (sct=0, sc=8) 00:30:22.701 Write completed with error (sct=0, sc=8) 00:30:22.701 Write completed with error (sct=0, sc=8) 00:30:22.701 Write completed with error (sct=0, sc=8) 00:30:22.701 Write completed with error (sct=0, sc=8) 00:30:22.701 Write completed with error (sct=0, sc=8) 00:30:22.701 Write completed with error (sct=0, sc=8) 00:30:22.701 Write completed with error (sct=0, sc=8) 00:30:22.701 Write completed with error (sct=0, sc=8) 00:30:22.701 Write completed with error (sct=0, sc=8) 00:30:22.701 Write completed with error (sct=0, sc=8) 00:30:22.701 Write completed with error (sct=0, sc=8) 00:30:22.701 Write completed with error (sct=0, sc=8) 00:30:22.701 Write completed with error (sct=0, sc=8) 00:30:22.701 Write completed with error (sct=0, sc=8) 00:30:22.701 Write completed with error (sct=0, sc=8) 00:30:22.701 Write completed with error (sct=0, sc=8) 00:30:22.701 Write completed with error (sct=0, sc=8) 00:30:22.701 Write completed with error (sct=0, sc=8) 00:30:22.701 Write completed with error (sct=0, sc=8) 00:30:22.701 Write completed with error (sct=0, sc=8) 00:30:22.701 Write completed with error (sct=0, sc=8) 00:30:22.701 Write completed with error (sct=0, sc=8) 00:30:22.701 Write completed with error (sct=0, sc=8) 00:30:22.701 Write 
completed with error (sct=0, sc=8) 00:30:22.701 Write completed with error (sct=0, sc=8) 00:30:22.701 Write completed with error (sct=0, sc=8) 00:30:22.701 Write completed with error (sct=0, sc=8) 00:30:22.701 Write completed with error (sct=0, sc=8) 00:30:22.701 Write completed with error (sct=0, sc=8) 00:30:22.701 Write completed with error (sct=0, sc=8) 00:30:22.701 Write completed with error (sct=0, sc=8) 00:30:22.701 Write completed with error (sct=0, sc=8) 00:30:22.701 Write completed with error (sct=0, sc=8) 00:30:22.701 Write completed with error (sct=0, sc=8) 00:30:22.701 Write completed with error (sct=0, sc=8) 00:30:22.701 Write completed with error (sct=0, sc=8) 00:30:22.701 Write completed with error (sct=0, sc=8) 00:30:22.701 Write completed with error (sct=0, sc=8) 00:30:22.701 Write completed with error (sct=0, sc=8) 00:30:22.701 Write completed with error (sct=0, sc=8) 00:30:22.701 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 
00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 starting I/O failed: -6 00:30:22.702 [2024-12-05 14:01:22.337151] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Submitting Keep Alive failed 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 
Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 
00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.702 Write completed with error (sct=0, sc=8) 00:30:22.703 Write completed with error (sct=0, sc=8) 00:30:22.703 Write completed with error (sct=0, sc=8) 00:30:22.703 Write completed with error (sct=0, sc=8) 00:30:22.703 Write completed with error (sct=0, sc=8) 00:30:22.703 Write completed with error (sct=0, sc=8) 00:30:22.703 Write completed with error (sct=0, sc=8) 00:30:22.703 Write completed with error (sct=0, sc=8) 00:30:22.703 starting I/O failed: -6 00:30:22.703 Write completed with error (sct=0, sc=8) 00:30:22.703 starting I/O failed: -6 00:30:22.703 Write completed with error (sct=0, sc=8) 00:30:22.703 [2024-12-05 14:01:22.348755] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Submitting Keep Alive failed 00:30:22.703 starting I/O failed: -6 00:30:22.703 Write completed with error (sct=0, sc=8) 00:30:22.703 starting I/O failed: -6 00:30:22.703 Write completed with error (sct=0, sc=8) 00:30:22.703 starting I/O failed: -6 00:30:22.703 Write completed with error (sct=0, sc=8) 00:30:22.703 starting I/O failed: -6 00:30:22.703 Write completed with error (sct=0, sc=8) 00:30:22.703 starting I/O failed: -6 00:30:22.703 Write completed with error (sct=0, sc=8) 00:30:22.703 starting I/O failed: -6 00:30:22.703 Write completed with error (sct=0, sc=8) 00:30:22.703 starting I/O failed: -6 00:30:22.703 Write completed with error (sct=0, sc=8) 00:30:22.703 starting I/O failed: -6 00:30:22.703 Write completed with error (sct=0, sc=8) 00:30:22.703 Write completed with error (sct=0, sc=8) 00:30:22.703 Write completed with error (sct=0, sc=8) 00:30:22.703 Write completed with error (sct=0, sc=8) 00:30:22.703 Write completed with error (sct=0, sc=8) 00:30:22.703 Write completed with error (sct=0, sc=8) 00:30:22.703 Write completed with error (sct=0, sc=8) 00:30:22.703 Write completed with error (sct=0, sc=8) 00:30:22.703 Write completed with error (sct=0, sc=8) 00:30:22.703 Write completed with error (sct=0, sc=8) 00:30:22.703 Write completed with error (sct=0, sc=8) 00:30:22.703 Write completed with error (sct=0, sc=8) 00:30:22.703 Write completed with error (sct=0, sc=8) 00:30:22.703 Write completed with error (sct=0, sc=8) 00:30:22.703 Write completed with error (sct=0, sc=8) 00:30:22.703 Write completed with error (sct=0, sc=8) 00:30:22.703 Write completed with error (sct=0, sc=8) 
00:30:22.703 Write completed with error (sct=0, sc=8) [... identical completions repeated; condensed ...]
00:30:22.703 starting I/O failed: -6
00:30:22.703 [2024-12-05 14:01:22.359895] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed
00:30:22.703 Write completed with error (sct=0, sc=8) [... identical completions repeated through 00:30:22.704; condensed ...]
00:30:22.704 [2024-12-05 14:01:22.371653] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Submitting Keep Alive failed
00:30:22.704 Write completed with error (sct=0, sc=8) [... identical completions repeated through 00:30:22.705; condensed ...]
00:30:22.705 starting I/O failed: -6 [... alternating with write-error completions ...]
00:30:22.705 [2024-12-05 14:01:22.384085] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Submitting Keep Alive failed
00:30:22.705 starting I/O failed: -6 [... alternating with write-error completions ...]
00:30:22.705 Write completed with error (sct=0, sc=8) [... identical completions repeated; condensed ...]
00:30:22.705 starting I/O failed: -6
00:30:22.706 [2024-12-05 14:01:22.396704] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Submitting Keep Alive failed
00:30:22.706 starting I/O failed: -6 [... alternating with write-error completions ...]
00:30:22.706 Write completed with error (sct=0, sc=8) [... identical completions repeated; condensed ...]
00:30:22.706 [2024-12-05 14:01:22.409321] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Submitting Keep Alive failed
00:30:22.706 Write completed with error (sct=0, sc=8) [... identical completions repeated through 00:30:22.707; condensed ...]
00:30:22.707 NVMe io qpair process completion error
00:30:22.707 NVMe io qpair process completion error
00:30:22.965 14:01:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 1845839
00:30:22.965 14:01:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0
00:30:22.965 14:01:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 1845839
00:30:22.965 14:01:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@640 -- # local arg=wait
00:30:22.966 14:01:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:30:22.966 14:01:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait
00:30:22.966 14:01:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:30:22.966 14:01:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 1845839
00:30:23.904 [2024-12-05 14:01:23.413010] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:30:23.904 [2024-12-05 14:01:23.413068] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state.
00:30:23.904 [2024-12-05 14:01:23.415431] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:30:23.904 [2024-12-05 14:01:23.415469] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state.
00:30:23.904 [2024-12-05 14:01:23.417139] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:30:23.904 [2024-12-05 14:01:23.417173] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state.
00:30:23.904 [2024-12-05 14:01:23.419367] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:30:23.904 [2024-12-05 14:01:23.419416] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
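The shell trace above is the harness's NOT wrapper asserting that "wait 1845839" exits non-zero once the perf process dies from the transport errors; the test step passes only if the wrapped command fails. A simplified reconstruction of that pattern, an assumption for illustration rather than the verbatim autotest_common.sh source (the real helper also special-cases signal deaths, which is what the "(( es > 128 ))" trace line further down checks):

# NOT cmd...: succeed only when cmd fails.
NOT() {
    # Capture the wrapped command's exit status, as in the
    # "local es=0" trace line above.
    local es=0
    "$@" || es=$?
    # The traced "(( !es == 0 ))" test is this same check in
    # arithmetic form: true exactly when es is non-zero.
    (( es != 0 ))
}

NOT false && echo "wrapped command failed, as required"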
00:30:23.904 Write completed with error (sct=0, sc=8) [... identical completions repeated; condensed ...]
00:30:23.904 [2024-12-05 14:01:23.421308] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:30:23.904 [2024-12-05 14:01:23.421341] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state.
00:30:23.904 Write completed with error (sct=0, sc=8) [... identical completions repeated; condensed ...]
00:30:23.904 [2024-12-05 14:01:23.423473] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:30:23.904 [2024-12-05 14:01:23.423505] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state.
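When triaging a cascade like this one, it helps to pull out which subsystems actually hit the CQ transport error. A small illustrative one-liner over a saved copy of this console output (build.log is a hypothetical stand-in filename):

grep -o '\[nqn\.2016-06\.io\.spdk:cnode[0-9]*, 1\] CQ transport error -6' build.log | sort -u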
00:30:23.904 Write completed with error (sct=0, sc=8) [... identical completions repeated; condensed ...]
00:30:23.904 [2024-12-05 14:01:23.425555] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:30:23.904 [2024-12-05 14:01:23.425588] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state.
00:30:23.904 Write completed with error (sct=0, sc=8) [... identical completions repeated; condensed ...]
00:30:23.904 [2024-12-05 14:01:23.432415] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:30:23.904 Write completed with error (sct=0, sc=8)
00:30:23.904 Write completed with error (sct=0, sc=8)
00:30:23.904 [2024-12-05 14:01:23.432453] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state.
00:30:23.904 Write completed with error (sct=0, sc=8) [... identical completions repeated through 00:30:23.905; condensed ...]
00:30:23.905 [2024-12-05 14:01:23.435145] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:30:23.905 [2024-12-05 14:01:23.435178] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state.
00:30:23.905 Write completed with error (sct=0, sc=8) [... identical completions repeated; condensed ...]
00:30:23.905 [2024-12-05 14:01:23.473855] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:30:23.905 [2024-12-05 14:01:23.473911] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state.
00:30:23.905 Initializing NVMe Controllers
00:30:23.905 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode2
00:30:23.905 Controller IO queue size 128, less than required.
00:30:23.905 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:30:23.905 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode3
00:30:23.905 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode6
00:30:23.905 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:30:23.905 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode5
00:30:23.905 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode4
00:30:23.905 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode7
00:30:23.905 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode9
00:30:23.905 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode10
00:30:23.905 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode8
00:30:23.905 Controller IO queue size 128, less than required. [... this warning and the next follow each controller attach above; duplicates condensed ...]
00:30:23.905 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
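The warning repeated per controller means the benchmark's queue depth exceeds the negotiated IO queue size (128 here), so surplus requests sit queued inside the NVMe driver rather than on the wire. A hedged illustration of a rerun with the depth capped at the queue size -- the -q/-o/-w/-t/-r flags are standard spdk_nvme_perf options, but the exact command line this job used is not shown in the log, so the values below are assumptions:

/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf \
    -q 128 -o 4096 -w write -t 10 \
    -r 'trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420'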
00:30:23.905 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:30:23.905 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:30:23.905 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:30:23.905 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:30:23.905 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:30:23.905 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:30:23.905 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:30:23.905 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:30:23.905 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:30:23.905 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:30:23.905 Initialization complete. Launching workers.
00:30:23.905 ========================================================
00:30:23.905                                                 Latency(us)
00:30:23.905 Device Information                                                             :       IOPS      MiB/s    Average        min        max
00:30:23.905 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0:    1568.48      67.40   80954.57   20391.76 1267721.14
00:30:23.905 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0:    1579.17      67.85   79923.53     109.97 1188407.23
00:30:23.905 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0:    1586.35      68.16   79668.25     125.62 1167841.30
00:30:23.905 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:    1562.80      67.15   80989.85   19747.08 1237770.95
00:30:23.905 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0:    1579.50      67.87   80240.65      98.56 1198097.43
00:30:23.906 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0:    1573.49      67.61   80672.00     107.39 1223671.76
00:30:23.906 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0:    1583.51      68.04   79522.35     113.82 1198751.81
00:30:23.906 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0:    1596.04      68.58   93451.59      96.24 2151154.12
00:30:23.906 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0:   1547.09      66.48   82257.38   26648.58 1277614.09
00:30:23.906 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0:    1583.68      68.05   94197.93     116.04 2206145.97
00:30:23.906 ========================================================
00:30:23.906 Total                                                                          :   15760.13     677.19   83205.87      96.24 2206145.97
00:30:23.906
00:30:23.906 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:30:23.906 14:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1
00:30:23.906 14:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:30:23.906 14:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:30:23.906 14:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:30:23.906 14:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget
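A quick sanity check on the summary row: Total IOPS is the sum of the per-controller IOPS, and the overall Average is their IOPS-weighted mean; both agree with the printed Total line up to per-row rounding. For example, feeding the table's IOPS and Average columns to awk:

awk '{ iops += $1; lat += $1 * $2 }
     END { printf "total IOPS %.2f, weighted avg latency %.2f us\n", iops, lat / iops }' <<'EOF'
1568.48 80954.57
1579.17 79923.53
1586.35 79668.25
1562.80 80989.85
1579.50 80240.65
1573.49 80672.00
1583.51 79522.35
1596.04 93451.59
1547.09 82257.38
1583.68 94197.93
EOF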
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:30:23.906 14:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:30:23.906 14:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:23.906 14:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:30:23.906 14:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:23.906 14:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:30:23.906 14:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:30:23.906 14:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:30:23.906 14:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:30:23.906 14:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:23.906 14:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:30:23.906 rmmod nvme_rdma 00:30:23.906 rmmod nvme_fabrics 00:30:23.906 14:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:23.906 14:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:30:23.906 14:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0 00:30:23.906 14:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 1845547 ']' 00:30:23.906 14:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 1845547 00:30:23.906 14:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 1845547 ']' 00:30:23.906 14:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 1845547 00:30:23.906 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1845547) - No such process 00:30:23.906 14:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 1845547 is not found' 00:30:23.906 Process with pid 1845547 is not found 00:30:23.906 14:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:23.906 14:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:30:23.906 00:30:23.906 real 0m8.591s 00:30:23.906 user 0m32.026s 00:30:23.906 sys 0m1.097s 00:30:23.906 14:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:23.906 14:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:30:23.906 ************************************ 00:30:23.906 END TEST nvmf_shutdown_tc4 00:30:23.906 
************************************ 00:30:23.906 14:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:30:23.906 00:30:23.906 real 0m31.109s 00:30:23.906 user 1m34.008s 00:30:23.906 sys 0m8.987s 00:30:23.906 14:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:23.906 14:01:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:23.906 ************************************ 00:30:23.906 END TEST nvmf_shutdown 00:30:23.906 ************************************ 00:30:23.906 14:01:23 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=rdma 00:30:23.906 14:01:23 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:23.906 14:01:23 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:23.906 14:01:23 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:30:23.906 ************************************ 00:30:23.906 START TEST nvmf_nsid 00:30:23.906 ************************************ 00:30:23.906 14:01:23 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=rdma 00:30:24.166 * Looking for test storage... 00:30:24.166 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:30:24.166 14:01:23 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:24.166 14:01:23 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lcov --version 00:30:24.166 14:01:23 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:24.166 14:01:23 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:24.166 14:01:23 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:24.166 14:01:23 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:24.166 14:01:23 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:24.166 14:01:23 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:30:24.166 14:01:23 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:30:24.166 14:01:23 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:30:24.166 14:01:23 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:30:24.166 14:01:23 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:30:24.166 14:01:23 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:30:24.166 14:01:23 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:30:24.166 14:01:23 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:24.166 14:01:23 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:30:24.166 14:01:23 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:30:24.166 14:01:23 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:24.166 14:01:23 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:24.166 14:01:23 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:30:24.166 14:01:23 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:30:24.166 14:01:23 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:24.166 14:01:23 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:30:24.166 14:01:23 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:30:24.166 14:01:23 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:30:24.166 14:01:23 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:30:24.166 14:01:23 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:24.166 14:01:23 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:30:24.166 14:01:23 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:30:24.166 14:01:23 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:24.166 14:01:23 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:24.166 14:01:23 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:30:24.166 14:01:23 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:24.166 14:01:23 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:24.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:24.166 --rc genhtml_branch_coverage=1 00:30:24.166 --rc genhtml_function_coverage=1 00:30:24.166 --rc genhtml_legend=1 00:30:24.166 --rc geninfo_all_blocks=1 00:30:24.166 --rc geninfo_unexecuted_blocks=1 00:30:24.166 00:30:24.166 ' 00:30:24.166 14:01:23 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:24.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:24.166 --rc genhtml_branch_coverage=1 00:30:24.166 --rc genhtml_function_coverage=1 00:30:24.166 --rc genhtml_legend=1 00:30:24.166 --rc geninfo_all_blocks=1 00:30:24.166 --rc geninfo_unexecuted_blocks=1 00:30:24.166 00:30:24.166 ' 00:30:24.166 14:01:23 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:24.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:24.166 --rc genhtml_branch_coverage=1 00:30:24.166 --rc genhtml_function_coverage=1 00:30:24.166 --rc genhtml_legend=1 00:30:24.166 --rc geninfo_all_blocks=1 00:30:24.166 --rc geninfo_unexecuted_blocks=1 00:30:24.166 00:30:24.166 ' 00:30:24.166 14:01:23 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:24.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:24.166 --rc genhtml_branch_coverage=1 00:30:24.166 --rc genhtml_function_coverage=1 00:30:24.166 --rc genhtml_legend=1 00:30:24.166 --rc geninfo_all_blocks=1 00:30:24.166 --rc geninfo_unexecuted_blocks=1 00:30:24.166 00:30:24.166 ' 00:30:24.166 14:01:23 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:30:24.166 14:01:23 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:30:24.166 14:01:23 
nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:24.166 14:01:23 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:24.166 14:01:23 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:24.166 14:01:23 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:24.166 14:01:23 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:24.166 14:01:23 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:24.167 14:01:23 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:24.167 14:01:23 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:24.167 14:01:23 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:24.167 14:01:23 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:24.167 14:01:23 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:30:24.167 14:01:23 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:30:24.167 14:01:23 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:24.167 14:01:23 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:24.167 14:01:23 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:24.167 14:01:23 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:24.167 14:01:23 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:30:24.167 14:01:23 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:30:24.167 14:01:23 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:24.167 14:01:23 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:24.167 14:01:23 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:24.167 14:01:23 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:24.167 14:01:23 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:24.167 14:01:23 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:24.167 14:01:23 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:30:24.167 14:01:23 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:24.167 14:01:23 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:30:24.167 14:01:23 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:24.167 14:01:23 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:24.167 14:01:23 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:24.167 14:01:23 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:24.167 14:01:23 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:24.167 14:01:23 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:24.167 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:24.167 14:01:23 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:24.167 14:01:23 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:24.167 14:01:23 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:24.167 14:01:23 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:30:24.167 14:01:23 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:30:24.167 14:01:23 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- 
target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:30:24.167 14:01:23 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:30:24.167 14:01:23 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:30:24.167 14:01:23 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:30:24.167 14:01:23 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:30:24.167 14:01:23 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:24.167 14:01:23 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:24.167 14:01:23 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:24.167 14:01:23 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:24.167 14:01:23 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:24.167 14:01:23 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:24.167 14:01:23 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:24.167 14:01:23 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:24.167 14:01:23 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:24.167 14:01:23 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:30:24.167 14:01:23 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:30:30.727 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:30.727 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:30:30.727 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:30.727 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:30.727 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:30.727 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:30.727 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:30.727 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:30:30.727 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:30.727 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:30:30.727 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:30:30.727 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:30:30.727 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:30:30.727 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # mlx=() 00:30:30.727 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:30:30.727 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:30.727 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:30.727 14:01:29 
nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:30.727 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:30.727 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:30.727 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:30.727 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:30.727 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:30.727 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:30.727 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:30.727 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:30.727 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:30.727 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:30.727 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:30:30.727 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:30:30.727 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:30:30.727 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:30:30.727 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:30:30.727 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:30.727 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:30.727 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:30:30.727 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:30:30.727 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:30:30.727 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:30:30.727 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:30:30.727 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:30:30.727 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:30:30.727 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:30:30.727 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:30.727 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:30:30.727 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:30:30.727 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:30:30.727 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 
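The scan above matches NICs purely by PCI vendor/device pair; both Found lines report 0x15b3/0x1015. An equivalent manual lookup (an assumed cross-check, not part of the test scripts) uses lspci's vendor:device filter:

  lspci -d 15b3:1015   # should list 0000:18:00.0 and 0000:18:00.1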
00:30:30.727 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:30:30.727 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:30:30.727 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:30:30.727 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:30:30.727 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:30.727 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:30:30.727 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:30.727 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:30.727 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:30:30.727 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:30.727 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:30.727 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:30:30.727 Found net devices under 0000:18:00.0: mlx_0_0 00:30:30.727 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:30.727 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:30.727 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:30.727 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:30:30.727 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:30.727 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:30.727 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:30:30.727 Found net devices under 0000:18:00.1: mlx_0_1 00:30:30.727 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:30.727 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:30.727 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:30:30.727 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:30.727 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:30:30.727 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:30:30.727 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@448 -- # rdma_device_init 00:30:30.727 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:30:30.727 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@62 -- # uname 00:30:30.727 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:30:30.727 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@66 -- # modprobe ib_cm 00:30:30.727 14:01:29 
nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@67 -- # modprobe ib_core 00:30:30.727 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@68 -- # modprobe ib_umad 00:30:30.727 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:30:30.727 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@70 -- # modprobe iw_cm 00:30:30.728 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:30:30.728 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:30:30.728 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@530 -- # allocate_nic_ips 00:30:30.728 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:30:30.728 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@77 -- # get_rdma_if_list 00:30:30.728 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:30:30.728 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:30:30.728 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:30:30.728 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:30:30.728 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:30:30.728 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:30:30.728 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:30.728 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:30:30.728 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@108 -- # echo mlx_0_0 00:30:30.728 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@109 -- # continue 2 00:30:30.728 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:30:30.728 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:30.728 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:30:30.728 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:30.728 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:30:30.728 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@108 -- # echo mlx_0_1 00:30:30.728 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@109 -- # continue 2 00:30:30.728 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:30:30.728 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:30:30.728 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:30:30.728 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:30:30.728 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:30:30.728 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # cut -d/ -f1 
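get_ip_address in the trace above is this three-stage pipeline: iproute2's one-line output puts ADDR/PREFIX in column 4, awk selects that column, and cut drops the CIDR prefix. As a standalone sketch using the first interface from this run:

  # prints the interface's IPv4 address without the /24 suffix
  ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1   # -> 192.168.100.8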
00:30:30.728 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:30:30.728 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:30:30.728 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:30:30.728 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:30:30.728 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:30:30.728 altname enp24s0f0np0 00:30:30.728 altname ens785f0np0 00:30:30.728 inet 192.168.100.8/24 scope global mlx_0_0 00:30:30.728 valid_lft forever preferred_lft forever 00:30:30.728 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:30:30.728 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:30:30.728 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:30:30.728 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:30:30.728 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:30:30.728 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:30:30.728 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:30:30.728 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:30:30.728 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:30:30.728 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:30:30.728 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:30:30.728 altname enp24s0f1np1 00:30:30.728 altname ens785f1np1 00:30:30.728 inet 192.168.100.9/24 scope global mlx_0_1 00:30:30.728 valid_lft forever preferred_lft forever 00:30:30.728 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:30:30.728 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:30.728 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:30:30.728 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:30:30.728 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:30:30.728 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@90 -- # get_rdma_if_list 00:30:30.728 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:30:30.728 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:30:30.728 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:30:30.728 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:30:30.728 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:30:30.728 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:30:30.728 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:30.728 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:30:30.728 
14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@108 -- # echo mlx_0_0 00:30:30.728 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@109 -- # continue 2 00:30:30.728 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:30:30.728 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:30.728 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:30:30.728 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:30.728 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:30:30.728 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@108 -- # echo mlx_0_1 00:30:30.728 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@109 -- # continue 2 00:30:30.728 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:30:30.728 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:30:30.728 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:30:30.728 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:30:30.728 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:30:30.728 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:30:30.728 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:30:30.728 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:30:30.728 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:30:30.728 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:30:30.728 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:30:30.728 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:30:30.728 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:30:30.728 192.168.100.9' 00:30:30.728 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:30:30.728 192.168.100.9' 00:30:30.728 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@485 -- # head -n 1 00:30:30.728 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:30:30.728 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:30:30.728 192.168.100.9' 00:30:30.728 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@486 -- # tail -n +2 00:30:30.728 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@486 -- # head -n 1 00:30:30.728 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:30:30.728 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:30:30.728 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:30:30.728 14:01:29 
nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:30:30.728 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:30:30.728 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:30:30.728 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:30:30.728 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:30.728 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:30.728 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:30:30.728 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=1850426 00:30:30.728 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 1850426 00:30:30.728 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:30:30.728 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 1850426 ']' 00:30:30.728 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:30.728 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:30.728 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:30.728 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:30.728 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:30.728 14:01:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:30:30.728 [2024-12-05 14:01:29.946397] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 00:30:30.728 [2024-12-05 14:01:29.946436] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:30.729 [2024-12-05 14:01:30.015834] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:30.729 [2024-12-05 14:01:30.036833] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:30.729 [2024-12-05 14:01:30.036871] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:30.729 [2024-12-05 14:01:30.036878] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:30.729 [2024-12-05 14:01:30.036884] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:30.729 [2024-12-05 14:01:30.036889] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
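nvmfappstart above launches the target with the flags recorded in the trace (-i shared-memory ID 0, -e 0xFFFF tracepoint mask, -m 1 core mask) and then waits for the RPC socket before any rpc.py call is made. A minimal sketch of that launch-and-wait pattern; the polling loop is an assumption, not a copy of waitforlisten:

  ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 &
  nvmfpid=$!
  # block until /var/tmp/spdk.sock exists so rpc.py can connect
  until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done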
00:30:30.729 [2024-12-05 14:01:30.037351] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:30.729 14:01:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:30.729 14:01:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:30:30.729 14:01:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:30.729 14:01:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:30.729 14:01:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:30:30.729 14:01:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:30.729 14:01:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:30:30.729 14:01:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=1850448 00:30:30.729 14:01:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:30:30.729 14:01:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=192.168.100.8 00:30:30.729 14:01:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:30:30.729 14:01:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:30:30.729 14:01:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:30.729 14:01:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:30.729 14:01:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:30.729 14:01:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:30.729 14:01:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:30:30.729 14:01:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:30:30.729 14:01:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:30:30.729 14:01:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:30:30.729 14:01:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:30:30.729 14:01:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=192.168.100.8 00:30:30.729 14:01:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:30:30.729 14:01:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=c53567de-5ea3-456e-b5d6-d2a3f4627c8b 00:30:30.729 14:01:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:30:30.729 14:01:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=2fee56bf-828c-4db6-aa03-e6af3158d942 00:30:30.729 14:01:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:30:30.729 14:01:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=ad06b482-6dae-4dab-b4f7-c161a7bee148 00:30:30.729 14:01:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:30:30.729 14:01:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:30.729 14:01:30 
nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:30:30.729 null0 00:30:30.729 null1 00:30:30.729 [2024-12-05 14:01:30.221065] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 00:30:30.729 [2024-12-05 14:01:30.221105] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1850448 ] 00:30:30.729 null2 00:30:30.729 [2024-12-05 14:01:30.255342] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x23388b0/0x2349950) succeed. 00:30:30.729 [2024-12-05 14:01:30.264734] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x2339d60/0x23c99c0) succeed. 00:30:30.729 [2024-12-05 14:01:30.294588] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:30.729 [2024-12-05 14:01:30.313305] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:30:30.729 [2024-12-05 14:01:30.316241] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:30.729 14:01:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:30.729 14:01:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 1850448 /var/tmp/tgt2.sock 00:30:30.729 14:01:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 1850448 ']' 00:30:30.729 14:01:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:30:30.729 14:01:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:30.729 14:01:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:30:30.729 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 00:30:30.729 14:01:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:30.729 14:01:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:30:30.729 14:01:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:30.729 14:01:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:30:30.729 14:01:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:30:31.296 [2024-12-05 14:01:30.881965] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1136eb0/0xec5ff0) succeed. 00:30:31.296 [2024-12-05 14:01:30.890930] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x10b2700/0xf07690) succeed. 
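The checks that follow verify that each namespace's NGUID is simply its UUID with the dashes stripped; uuid2nguid in nvmf/common.sh is the tr -d - call visible below. A minimal standalone version of the NSID 1 comparison, assembled from the commands this trace actually runs:

  uuid=c53567de-5ea3-456e-b5d6-d2a3f4627c8b                     # ns1uuid generated above
  want=$(echo "$uuid" | tr -d - | tr '[:lower:]' '[:upper:]')   # uuid2nguid, uppercased
  got=$(nvme id-ns /dev/nvme0n1 -o json | jq -r .nguid | tr '[:lower:]' '[:upper:]')
  [ "$got" = "$want" ] && echo "NSID 1 NGUID matches"           # C53567DE5EA3456EB5D6D2A3F4627C8B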
00:30:31.296 [2024-12-05 14:01:30.931793] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:30:31.296 nvme0n1 nvme0n2 00:30:31.296 nvme1n1 00:30:31.296 14:01:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:30:31.296 14:01:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:30:31.296 14:01:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t rdma -a 192.168.100.8 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 00:30:39.415 14:01:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:30:39.415 14:01:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:30:39.415 14:01:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:30:39.415 14:01:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:30:39.415 14:01:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:30:39.415 14:01:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:30:39.415 14:01:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:30:39.415 14:01:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:30:39.415 14:01:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:30:39.415 14:01:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:30:39.415 14:01:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:30:39.415 14:01:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:30:39.415 14:01:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:30:39.415 14:01:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid c53567de-5ea3-456e-b5d6-d2a3f4627c8b 00:30:39.415 14:01:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:30:39.415 14:01:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:30:39.415 14:01:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:30:39.415 14:01:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:30:39.415 14:01:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:30:39.415 14:01:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=c53567de5ea3456eb5d6d2a3f4627c8b 00:30:39.415 14:01:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo C53567DE5EA3456EB5D6D2A3F4627C8B 00:30:39.415 14:01:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ C53567DE5EA3456EB5D6D2A3F4627C8B == \C\5\3\5\6\7\D\E\5\E\A\3\4\5\6\E\B\5\D\6\D\2\A\3\F\4\6\2\7\C\8\B ]] 00:30:39.415 14:01:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:30:39.415 14:01:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:30:39.415 14:01:37 
nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:30:39.415 14:01:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:30:39.415 14:01:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:30:39.415 14:01:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:30:39.415 14:01:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:30:39.415 14:01:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 2fee56bf-828c-4db6-aa03-e6af3158d942 00:30:39.415 14:01:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:30:39.415 14:01:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:30:39.415 14:01:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:30:39.415 14:01:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:30:39.415 14:01:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:30:39.415 14:01:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=2fee56bf828c4db6aa03e6af3158d942 00:30:39.415 14:01:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 2FEE56BF828C4DB6AA03E6AF3158D942 00:30:39.415 14:01:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 2FEE56BF828C4DB6AA03E6AF3158D942 == \2\F\E\E\5\6\B\F\8\2\8\C\4\D\B\6\A\A\0\3\E\6\A\F\3\1\5\8\D\9\4\2 ]] 00:30:39.415 14:01:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:30:39.415 14:01:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:30:39.415 14:01:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:30:39.415 14:01:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:30:39.415 14:01:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:30:39.415 14:01:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:30:39.415 14:01:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:30:39.415 14:01:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid ad06b482-6dae-4dab-b4f7-c161a7bee148 00:30:39.415 14:01:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:30:39.415 14:01:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:30:39.415 14:01:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:30:39.415 14:01:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:30:39.415 14:01:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:30:39.415 14:01:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=ad06b4826dae4dabb4f7c161a7bee148 00:30:39.415 14:01:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo AD06B4826DAE4DABB4F7C161A7BEE148 00:30:39.415 14:01:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ AD06B4826DAE4DABB4F7C161A7BEE148 == 
\A\D\0\6\B\4\8\2\6\D\A\E\4\D\A\B\B\4\F\7\C\1\6\1\A\7\B\E\E\1\4\8 ]] 00:30:39.415 14:01:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:30:45.984 14:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:30:45.984 14:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:30:45.984 14:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 1850448 00:30:45.984 14:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 1850448 ']' 00:30:45.984 14:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 1850448 00:30:45.984 14:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:30:45.984 14:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:45.984 14:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1850448 00:30:45.984 14:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:45.984 14:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:45.984 14:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1850448' 00:30:45.984 killing process with pid 1850448 00:30:45.984 14:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 1850448 00:30:45.984 14:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 1850448 00:30:45.984 14:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:30:45.984 14:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:45.984 14:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:30:45.984 14:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:30:45.984 14:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:30:45.984 14:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:30:45.984 14:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:45.984 14:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:30:45.984 rmmod nvme_rdma 00:30:45.984 rmmod nvme_fabrics 00:30:45.984 14:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:45.984 14:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:30:45.984 14:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:30:45.984 14:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 1850426 ']' 00:30:45.984 14:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 1850426 00:30:45.984 14:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 1850426 ']' 00:30:45.984 14:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 1850426 00:30:45.984 14:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:30:45.984 14:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:45.984 14:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1850426 00:30:45.984 14:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:45.984 14:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:45.984 14:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1850426' 00:30:45.984 killing process with pid 1850426 00:30:45.984 14:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 1850426 00:30:45.984 14:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 1850426 00:30:45.984 14:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:45.984 14:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:30:45.984 00:30:45.984 real 0m22.043s 00:30:45.984 user 0m32.477s 00:30:45.984 sys 0m5.755s 00:30:45.984 14:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:45.984 14:01:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:30:45.984 ************************************ 00:30:45.984 END TEST nvmf_nsid 00:30:45.984 ************************************ 00:30:45.984 14:01:45 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:30:45.984 00:30:45.984 real 15m17.593s 00:30:45.984 user 47m9.748s 00:30:45.984 sys 2m49.729s 00:30:45.984 14:01:45 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:45.984 14:01:45 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:30:45.984 ************************************ 00:30:45.984 END TEST nvmf_target_extra 00:30:45.984 ************************************ 00:30:45.984 14:01:45 nvmf_rdma -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=rdma 00:30:45.984 14:01:45 nvmf_rdma -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:45.984 14:01:45 nvmf_rdma -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:45.984 14:01:45 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:30:45.984 ************************************ 00:30:45.984 START TEST nvmf_host 00:30:45.984 ************************************ 00:30:45.984 14:01:45 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=rdma 00:30:46.243 * Looking for test storage... 
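[Annotation] The nsid assertions traced above hinge on two small helpers: uuid2nguid, which turns a namespace UUID into its expected NGUID by deleting the dashes (tr -d -) and uppercasing, and nvme_get_nguid, which reads the NGUID the kernel actually reports via nvme-cli and jq. A minimal sketch reconstructed from the xtrace, not copied from target/nsid.sh:

    # Reconstruction of the two helpers traced above; names match the xtrace,
    # bodies are a sketch rather than the exact target/nsid.sh source.
    uuid2nguid() {
        # NGUID == UUID with the dashes stripped; uppercase to normalize.
        echo "${1//-/}" | tr '[:lower:]' '[:upper:]'
    }

    nvme_get_nguid() {
        local ctrlr=$1 nsid=$2 nguid
        # Ask the device for its namespace data and pull the nguid field.
        nguid=$(nvme id-ns "/dev/${ctrlr}n${nsid}" -o json | jq -r .nguid)
        echo "${nguid^^}"
    }

    # The checks at nsid.sh@98-100 are then plain string comparisons:
    [[ $(nvme_get_nguid nvme0 2) == $(uuid2nguid 2fee56bf-828c-4db6-aa03-e6af3158d942) ]]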
00:30:46.243 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf 00:30:46.243 14:01:45 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:46.243 14:01:45 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1711 -- # lcov --version 00:30:46.243 14:01:45 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:46.243 14:01:45 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:46.243 14:01:45 nvmf_rdma.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:46.243 14:01:45 nvmf_rdma.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:46.243 14:01:45 nvmf_rdma.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:46.243 14:01:45 nvmf_rdma.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:30:46.243 14:01:45 nvmf_rdma.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:30:46.243 14:01:45 nvmf_rdma.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:30:46.243 14:01:45 nvmf_rdma.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:30:46.243 14:01:45 nvmf_rdma.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:30:46.243 14:01:45 nvmf_rdma.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:30:46.243 14:01:45 nvmf_rdma.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:30:46.243 14:01:45 nvmf_rdma.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:46.243 14:01:45 nvmf_rdma.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:30:46.243 14:01:45 nvmf_rdma.nvmf_host -- scripts/common.sh@345 -- # : 1 00:30:46.243 14:01:45 nvmf_rdma.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:46.243 14:01:45 nvmf_rdma.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:46.243 14:01:45 nvmf_rdma.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:30:46.243 14:01:45 nvmf_rdma.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:30:46.243 14:01:45 nvmf_rdma.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:46.243 14:01:45 nvmf_rdma.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:30:46.243 14:01:45 nvmf_rdma.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:30:46.243 14:01:45 nvmf_rdma.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:30:46.243 14:01:45 nvmf_rdma.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:30:46.243 14:01:45 nvmf_rdma.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:46.243 14:01:45 nvmf_rdma.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:30:46.243 14:01:45 nvmf_rdma.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:30:46.243 14:01:45 nvmf_rdma.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:46.243 14:01:45 nvmf_rdma.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:46.243 14:01:45 nvmf_rdma.nvmf_host -- scripts/common.sh@368 -- # return 0 00:30:46.243 14:01:45 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:46.243 14:01:45 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:46.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:46.243 --rc genhtml_branch_coverage=1 00:30:46.243 --rc genhtml_function_coverage=1 00:30:46.243 --rc genhtml_legend=1 00:30:46.243 --rc geninfo_all_blocks=1 00:30:46.243 --rc geninfo_unexecuted_blocks=1 00:30:46.243 00:30:46.243 ' 00:30:46.243 14:01:45 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 
00:30:46.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:46.243 --rc genhtml_branch_coverage=1 00:30:46.243 --rc genhtml_function_coverage=1 00:30:46.243 --rc genhtml_legend=1 00:30:46.243 --rc geninfo_all_blocks=1 00:30:46.243 --rc geninfo_unexecuted_blocks=1 00:30:46.243 00:30:46.243 ' 00:30:46.243 14:01:45 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:46.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:46.243 --rc genhtml_branch_coverage=1 00:30:46.243 --rc genhtml_function_coverage=1 00:30:46.243 --rc genhtml_legend=1 00:30:46.243 --rc geninfo_all_blocks=1 00:30:46.243 --rc geninfo_unexecuted_blocks=1 00:30:46.243 00:30:46.243 ' 00:30:46.243 14:01:45 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:46.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:46.243 --rc genhtml_branch_coverage=1 00:30:46.243 --rc genhtml_function_coverage=1 00:30:46.243 --rc genhtml_legend=1 00:30:46.243 --rc geninfo_all_blocks=1 00:30:46.243 --rc geninfo_unexecuted_blocks=1 00:30:46.243 00:30:46.243 ' 00:30:46.243 14:01:45 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:30:46.243 14:01:45 nvmf_rdma.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:30:46.243 14:01:46 nvmf_rdma.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:46.243 14:01:46 nvmf_rdma.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:46.243 14:01:46 nvmf_rdma.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:46.243 14:01:46 nvmf_rdma.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:46.243 14:01:46 nvmf_rdma.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:46.243 14:01:46 nvmf_rdma.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:46.243 14:01:46 nvmf_rdma.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:46.243 14:01:46 nvmf_rdma.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:46.243 14:01:46 nvmf_rdma.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:46.243 14:01:46 nvmf_rdma.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:46.243 14:01:46 nvmf_rdma.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:30:46.243 14:01:46 nvmf_rdma.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:30:46.243 14:01:46 nvmf_rdma.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:46.243 14:01:46 nvmf_rdma.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:46.243 14:01:46 nvmf_rdma.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:46.243 14:01:46 nvmf_rdma.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:46.243 14:01:46 nvmf_rdma.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:30:46.243 14:01:46 nvmf_rdma.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:30:46.243 14:01:46 nvmf_rdma.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:46.243 14:01:46 nvmf_rdma.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:46.243 14:01:46 nvmf_rdma.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:46.243 14:01:46 nvmf_rdma.nvmf_host -- paths/export.sh@2 -- 
# PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:46.244 14:01:46 nvmf_rdma.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:46.244 14:01:46 nvmf_rdma.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:46.244 14:01:46 nvmf_rdma.nvmf_host -- paths/export.sh@5 -- # export PATH 00:30:46.244 14:01:46 nvmf_rdma.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:46.244 14:01:46 nvmf_rdma.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:30:46.244 14:01:46 nvmf_rdma.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:46.244 14:01:46 nvmf_rdma.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:46.244 14:01:46 nvmf_rdma.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:46.244 14:01:46 nvmf_rdma.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:46.244 14:01:46 nvmf_rdma.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:46.244 14:01:46 nvmf_rdma.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:46.244 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:46.244 14:01:46 nvmf_rdma.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:46.244 14:01:46 nvmf_rdma.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:46.244 14:01:46 nvmf_rdma.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:46.244 14:01:46 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:30:46.244 14:01:46 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:30:46.244 14:01:46 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:30:46.244 14:01:46 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=rdma 00:30:46.244 14:01:46 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:46.244 14:01:46 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:46.244 14:01:46 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:46.244 ************************************ 00:30:46.244 START TEST nvmf_multicontroller 00:30:46.244 ************************************ 00:30:46.244 14:01:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=rdma 00:30:46.503 * Looking for test storage... 00:30:46.503 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:30:46.503 14:01:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:46.503 14:01:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lcov --version 00:30:46.503 14:01:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:46.503 14:01:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:46.503 14:01:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:46.503 14:01:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:46.503 14:01:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:46.503 14:01:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:30:46.503 14:01:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:30:46.503 14:01:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:30:46.503 14:01:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:30:46.503 14:01:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:30:46.503 14:01:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:30:46.503 14:01:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:30:46.503 14:01:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:46.503 14:01:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:30:46.503 14:01:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:30:46.503 14:01:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:46.503 14:01:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:46.503 14:01:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:30:46.503 14:01:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:30:46.503 14:01:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:46.503 14:01:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:30:46.503 14:01:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:30:46.503 14:01:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:30:46.503 14:01:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:30:46.503 14:01:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:46.503 14:01:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:30:46.503 14:01:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:30:46.503 14:01:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:46.503 14:01:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:46.503 14:01:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:30:46.503 14:01:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:46.503 14:01:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:46.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:46.503 --rc genhtml_branch_coverage=1 00:30:46.503 --rc genhtml_function_coverage=1 00:30:46.503 --rc genhtml_legend=1 00:30:46.503 --rc geninfo_all_blocks=1 00:30:46.503 --rc geninfo_unexecuted_blocks=1 00:30:46.503 00:30:46.503 ' 00:30:46.503 14:01:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:46.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:46.503 --rc genhtml_branch_coverage=1 00:30:46.503 --rc genhtml_function_coverage=1 00:30:46.503 --rc genhtml_legend=1 00:30:46.503 --rc geninfo_all_blocks=1 00:30:46.503 --rc geninfo_unexecuted_blocks=1 00:30:46.503 00:30:46.503 ' 00:30:46.503 14:01:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:46.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:46.503 --rc genhtml_branch_coverage=1 00:30:46.503 --rc genhtml_function_coverage=1 00:30:46.503 --rc genhtml_legend=1 00:30:46.503 --rc geninfo_all_blocks=1 00:30:46.503 --rc geninfo_unexecuted_blocks=1 00:30:46.503 00:30:46.503 ' 00:30:46.503 14:01:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:46.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:46.503 --rc genhtml_branch_coverage=1 00:30:46.503 --rc genhtml_function_coverage=1 00:30:46.503 --rc genhtml_legend=1 00:30:46.503 --rc geninfo_all_blocks=1 00:30:46.503 --rc geninfo_unexecuted_blocks=1 00:30:46.503 00:30:46.503 ' 00:30:46.503 14:01:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:30:46.503 14:01:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 
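[Annotation] The block traced above (and repeated at the start of each host test) gates lcov coverage flags on the installed lcov version: the versions are split on dots and compared numerically field by field, and "lt 1.15 2" succeeds because lcov 1.15 predates 2.x. A simplified reconstruction of the check, not the verbatim scripts/common.sh source:

    # Simplified sketch of the "lt 1.15 2" test: split both versions on
    # dots/dashes/colons and compare component-wise, padding with zeros.
    lt() {
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0  # strictly older
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1  # strictly newer
        done
        return 1  # equal versions are not less-than
    }

    # Older lcov needs the explicit --rc flags seen in the trace.
    if lt "$(lcov --version | awk '{print $NF}')" 2; then
        LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    fi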
00:30:46.503 14:01:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:46.503 14:01:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:46.503 14:01:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:46.503 14:01:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:46.503 14:01:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:46.503 14:01:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:46.503 14:01:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:46.503 14:01:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:46.503 14:01:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:46.503 14:01:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:46.503 14:01:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:30:46.503 14:01:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:30:46.503 14:01:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:46.503 14:01:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:46.503 14:01:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:46.503 14:01:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:46.503 14:01:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:30:46.503 14:01:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:30:46.503 14:01:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:46.503 14:01:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:46.503 14:01:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:46.503 14:01:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:46.504 14:01:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:46.504 14:01:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:46.504 14:01:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:30:46.504 14:01:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:46.504 14:01:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:30:46.504 14:01:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:46.504 14:01:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:46.504 14:01:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:46.504 14:01:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:46.504 14:01:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:46.504 14:01:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:46.504 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:46.504 14:01:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:46.504 14:01:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:46.504 14:01:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:46.504 14:01:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:46.504 14:01:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:46.504 14:01:46 
nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:30:46.504 14:01:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:30:46.504 14:01:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:46.504 14:01:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' rdma == rdma ']' 00:30:46.504 14:01:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@19 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.' 00:30:46.504 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 00:30:46.504 14:01:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@20 -- # exit 0 00:30:46.504 00:30:46.504 real 0m0.201s 00:30:46.504 user 0m0.128s 00:30:46.504 sys 0m0.087s 00:30:46.504 14:01:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:46.504 14:01:46 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:46.504 ************************************ 00:30:46.504 END TEST nvmf_multicontroller 00:30:46.504 ************************************ 00:30:46.504 14:01:46 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=rdma 00:30:46.504 14:01:46 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:46.504 14:01:46 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:46.504 14:01:46 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:46.504 ************************************ 00:30:46.504 START TEST nvmf_aer 00:30:46.504 ************************************ 00:30:46.504 14:01:46 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=rdma 00:30:46.763 * Looking for test storage... 
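[Annotation] nvmf_multicontroller exits almost immediately on RDMA: the guard traced at host/multicontroller.sh@18-20 matches the transport, prints the skip message, and exits 0 so the suite records a skip rather than a failure. Roughly, assuming the transport arrives in TEST_TRANSPORT as set by --transport=rdma:

    # Rough shape of the early-exit guard reconstructed from the xtrace.
    if [ "$TEST_TRANSPORT" == rdma ]; then
        echo 'Skipping tests on RDMA because the rdma stack fails to configure' \
             'the same IP for host and target.'
        exit 0  # exit success so the run counts as a skip, not a failure
    fi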
00:30:46.763 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:30:46.763 14:01:46 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:46.763 14:01:46 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lcov --version 00:30:46.763 14:01:46 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:46.763 14:01:46 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:46.763 14:01:46 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:46.763 14:01:46 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:46.763 14:01:46 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:46.763 14:01:46 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:30:46.763 14:01:46 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:30:46.763 14:01:46 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:30:46.763 14:01:46 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:30:46.763 14:01:46 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:30:46.763 14:01:46 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:30:46.763 14:01:46 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:30:46.763 14:01:46 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:46.763 14:01:46 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:30:46.763 14:01:46 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:30:46.763 14:01:46 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:46.763 14:01:46 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:46.763 14:01:46 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:30:46.763 14:01:46 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:30:46.763 14:01:46 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:46.763 14:01:46 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:30:46.763 14:01:46 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:30:46.763 14:01:46 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:30:46.763 14:01:46 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:30:46.763 14:01:46 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:46.763 14:01:46 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:30:46.763 14:01:46 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:30:46.763 14:01:46 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:46.763 14:01:46 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:46.763 14:01:46 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:30:46.763 14:01:46 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:46.763 14:01:46 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:46.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:46.763 --rc genhtml_branch_coverage=1 00:30:46.763 --rc genhtml_function_coverage=1 00:30:46.763 --rc genhtml_legend=1 00:30:46.763 --rc geninfo_all_blocks=1 00:30:46.763 --rc geninfo_unexecuted_blocks=1 00:30:46.763 00:30:46.763 ' 00:30:46.763 14:01:46 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:46.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:46.763 --rc genhtml_branch_coverage=1 00:30:46.763 --rc genhtml_function_coverage=1 00:30:46.763 --rc genhtml_legend=1 00:30:46.763 --rc geninfo_all_blocks=1 00:30:46.763 --rc geninfo_unexecuted_blocks=1 00:30:46.763 00:30:46.763 ' 00:30:46.763 14:01:46 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:46.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:46.763 --rc genhtml_branch_coverage=1 00:30:46.763 --rc genhtml_function_coverage=1 00:30:46.763 --rc genhtml_legend=1 00:30:46.763 --rc geninfo_all_blocks=1 00:30:46.763 --rc geninfo_unexecuted_blocks=1 00:30:46.763 00:30:46.763 ' 00:30:46.763 14:01:46 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:46.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:46.763 --rc genhtml_branch_coverage=1 00:30:46.763 --rc genhtml_function_coverage=1 00:30:46.763 --rc genhtml_legend=1 00:30:46.763 --rc geninfo_all_blocks=1 00:30:46.763 --rc geninfo_unexecuted_blocks=1 00:30:46.763 00:30:46.763 ' 00:30:46.763 14:01:46 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:30:46.763 14:01:46 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:30:46.763 14:01:46 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:46.763 14:01:46 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:46.763 14:01:46 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@10 
-- # NVMF_SECOND_PORT=4421 00:30:46.763 14:01:46 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:46.763 14:01:46 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:46.763 14:01:46 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:46.763 14:01:46 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:46.763 14:01:46 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:46.763 14:01:46 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:46.763 14:01:46 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:46.763 14:01:46 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:30:46.763 14:01:46 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:30:46.763 14:01:46 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:46.763 14:01:46 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:46.763 14:01:46 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:46.763 14:01:46 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:46.763 14:01:46 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:30:46.764 14:01:46 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:30:46.764 14:01:46 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:46.764 14:01:46 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:46.764 14:01:46 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:46.764 14:01:46 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:46.764 14:01:46 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:46.764 14:01:46 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:46.764 14:01:46 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:30:46.764 14:01:46 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:46.764 14:01:46 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:30:46.764 14:01:46 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:46.764 14:01:46 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:46.764 14:01:46 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:46.764 14:01:46 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:46.764 14:01:46 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:46.764 14:01:46 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:46.764 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:46.764 14:01:46 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:46.764 14:01:46 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:46.764 14:01:46 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:46.764 14:01:46 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:30:46.764 14:01:46 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:30:46.764 14:01:46 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:46.764 14:01:46 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:46.764 14:01:46 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:46.764 14:01:46 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:46.764 14:01:46 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:46.764 14:01:46 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:46.764 14:01:46 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:46.764 14:01:46 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:46.764 14:01:46 nvmf_rdma.nvmf_host.nvmf_aer -- 
nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:46.764 14:01:46 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:30:46.764 14:01:46 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:53.385 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:53.385 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:30:53.385 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:53.385 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:53.385 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:53.385 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:53.385 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:53.385 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:30:53.385 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:53.385 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # e810=() 00:30:53.385 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:30:53.385 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:30:53.385 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:30:53.385 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:30:53.385 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:30:53.385 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:53.385 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:53.386 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:53.386 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:53.386 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:53.386 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:53.386 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:53.386 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:53.386 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:53.386 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:53.386 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:53.386 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:53.386 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:53.386 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:30:53.386 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:30:53.386 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:30:53.386 14:01:52 
nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:30:53.386 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:30:53.386 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:53.386 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:53.386 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:30:53.386 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:30:53.386 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:30:53.386 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:30:53.386 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:30:53.386 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:30:53.386 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:30:53.386 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:30:53.386 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:53.386 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:30:53.386 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:30:53.386 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:30:53.386 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:30:53.386 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:30:53.386 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:30:53.386 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:30:53.386 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:30:53.386 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:53.386 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:30:53.386 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:53.386 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:53.386 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:30:53.386 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:53.386 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:53.386 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:30:53.386 Found net devices under 0000:18:00.0: mlx_0_0 00:30:53.386 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:53.386 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:53.386 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:53.386 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:30:53.386 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:53.386 
14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:53.386 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:30:53.386 Found net devices under 0000:18:00.1: mlx_0_1 00:30:53.386 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:53.386 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:53.386 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:30:53.386 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:53.386 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:30:53.386 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:30:53.386 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@448 -- # rdma_device_init 00:30:53.386 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:30:53.386 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@62 -- # uname 00:30:53.386 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:30:53.386 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@66 -- # modprobe ib_cm 00:30:53.386 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@67 -- # modprobe ib_core 00:30:53.386 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@68 -- # modprobe ib_umad 00:30:53.386 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:30:53.386 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@70 -- # modprobe iw_cm 00:30:53.386 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:30:53.386 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:30:53.386 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@530 -- # allocate_nic_ips 00:30:53.386 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:30:53.386 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@77 -- # get_rdma_if_list 00:30:53.386 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:30:53.386 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:30:53.386 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:30:53.386 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:30:53.386 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:30:53.386 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:30:53.386 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:53.386 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:30:53.386 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@108 -- # echo mlx_0_0 00:30:53.386 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@109 -- # continue 2 00:30:53.386 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:30:53.386 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:53.386 14:01:52 
nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:30:53.386 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:53.386 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:30:53.386 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@108 -- # echo mlx_0_1 00:30:53.386 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@109 -- # continue 2 00:30:53.386 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:30:53.386 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:30:53.386 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:30:53.386 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:30:53.386 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # awk '{print $4}' 00:30:53.386 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # cut -d/ -f1 00:30:53.386 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:30:53.386 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:30:53.386 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:30:53.386 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:30:53.386 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:30:53.386 altname enp24s0f0np0 00:30:53.386 altname ens785f0np0 00:30:53.386 inet 192.168.100.8/24 scope global mlx_0_0 00:30:53.386 valid_lft forever preferred_lft forever 00:30:53.386 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:30:53.386 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:30:53.386 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:30:53.386 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:30:53.386 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # awk '{print $4}' 00:30:53.386 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # cut -d/ -f1 00:30:53.386 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:30:53.386 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:30:53.386 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:30:53.386 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:30:53.386 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:30:53.386 altname enp24s0f1np1 00:30:53.386 altname ens785f1np1 00:30:53.386 inet 192.168.100.9/24 scope global mlx_0_1 00:30:53.386 valid_lft forever preferred_lft forever 00:30:53.386 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:30:53.386 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:53.386 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:30:53.386 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:30:53.386 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:30:53.386 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@90 -- # get_rdma_if_list 00:30:53.387 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- 
nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:30:53.387 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:30:53.387 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:30:53.387 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:30:53.387 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:30:53.387 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:30:53.387 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:53.387 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:30:53.387 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@108 -- # echo mlx_0_0 00:30:53.387 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@109 -- # continue 2 00:30:53.387 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:30:53.387 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:53.387 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:30:53.387 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:53.387 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:30:53.387 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@108 -- # echo mlx_0_1 00:30:53.387 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@109 -- # continue 2 00:30:53.387 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:30:53.387 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:30:53.387 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:30:53.387 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:30:53.387 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # awk '{print $4}' 00:30:53.387 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # cut -d/ -f1 00:30:53.387 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:30:53.387 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:30:53.387 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:30:53.387 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:30:53.387 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # awk '{print $4}' 00:30:53.387 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # cut -d/ -f1 00:30:53.387 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:30:53.387 192.168.100.9' 00:30:53.387 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:30:53.387 192.168.100.9' 00:30:53.387 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@485 -- # head -n 1 00:30:53.387 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:30:53.387 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:30:53.387 192.168.100.9' 
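[Annotation] The allocate_nic_ips/get_available_rdma_ips traces above reduce to: enumerate the RDMA-capable netdevs, read each one's IPv4 address with ip/awk/cut, then take the first result as the primary target IP and the second as the secondary. A condensed sketch using the interface names this run detected (mlx_0_0/mlx_0_1), which are not hard-coded in the harness:

    # Extract the bare IPv4 address of one interface, as traced above.
    get_ip_address() {
        ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
    }

    RDMA_IP_LIST=$(for dev in mlx_0_0 mlx_0_1; do get_ip_address "$dev"; done)
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                # 192.168.100.8
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # 192.168.100.9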
00:30:53.387 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@486 -- # tail -n +2 00:30:53.387 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@486 -- # head -n 1 00:30:53.387 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:30:53.387 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:30:53.387 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:30:53.387 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:30:53.387 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:30:53.387 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:30:53.387 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:30:53.387 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:53.387 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:53.387 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:53.387 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=1856756 00:30:53.387 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 1856756 00:30:53.387 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:30:53.387 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 1856756 ']' 00:30:53.387 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:53.387 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:53.387 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:53.387 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:53.387 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:53.387 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:53.387 [2024-12-05 14:01:52.473278] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 00:30:53.387 [2024-12-05 14:01:52.473320] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:53.387 [2024-12-05 14:01:52.547592] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:53.387 [2024-12-05 14:01:52.570283] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:53.387 [2024-12-05 14:01:52.570322] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:53.387 [2024-12-05 14:01:52.570328] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:53.387 [2024-12-05 14:01:52.570333] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:30:53.387 [2024-12-05 14:01:52.570338] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:53.387 [2024-12-05 14:01:52.571696] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:53.387 [2024-12-05 14:01:52.571808] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:53.387 [2024-12-05 14:01:52.571914] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:53.387 [2024-12-05 14:01:52.571915] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:53.387 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:53.387 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:30:53.387 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:53.387 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:53.387 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:53.387 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:53.387 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:30:53.387 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:53.387 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:53.387 [2024-12-05 14:01:52.719071] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xcccf30/0xcd1420) succeed. 00:30:53.387 [2024-12-05 14:01:52.727661] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xcce5c0/0xd12ac0) succeed. 
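The nvmf_create_transport call at host/aer.sh@14 above is what triggers the two create_ib_device notices; the trace that follows (host/aer.sh@16-19) then builds the test subsystem. Stripped of the xtrace framing, the RPC sequence is roughly the following, a hedged reconstruction in which rpc.py stands in for the rpc_cmd wrapper:

    rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192   # as at host/aer.sh@14
    rpc.py bdev_malloc_create 64 512 --name Malloc0                          # 64 MiB malloc bdev, 512 B blocks
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0          # becomes nsid 1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

The -a flag allows any host and -m 2 caps the subsystem at two namespaces, matching the "allow_any_host": true and "max_namespaces": 2 fields in the nvmf_get_subsystems JSON that follows.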
00:30:53.387 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:53.387 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:30:53.387 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:53.387 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:53.387 Malloc0 00:30:53.387 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:53.387 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:30:53.387 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:53.387 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:53.387 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:53.387 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:53.387 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:53.387 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:53.387 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:53.387 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:30:53.387 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:53.387 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:53.387 [2024-12-05 14:01:52.895422] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:30:53.387 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:53.387 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:30:53.387 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:53.387 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:53.387 [ 00:30:53.387 { 00:30:53.387 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:30:53.387 "subtype": "Discovery", 00:30:53.387 "listen_addresses": [], 00:30:53.387 "allow_any_host": true, 00:30:53.387 "hosts": [] 00:30:53.387 }, 00:30:53.387 { 00:30:53.387 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:53.387 "subtype": "NVMe", 00:30:53.387 "listen_addresses": [ 00:30:53.387 { 00:30:53.387 "trtype": "RDMA", 00:30:53.387 "adrfam": "IPv4", 00:30:53.387 "traddr": "192.168.100.8", 00:30:53.387 "trsvcid": "4420" 00:30:53.387 } 00:30:53.387 ], 00:30:53.387 "allow_any_host": true, 00:30:53.387 "hosts": [], 00:30:53.387 "serial_number": "SPDK00000000000001", 00:30:53.387 "model_number": "SPDK bdev Controller", 00:30:53.387 "max_namespaces": 2, 00:30:53.387 "min_cntlid": 1, 00:30:53.387 "max_cntlid": 65519, 00:30:53.387 "namespaces": [ 00:30:53.387 { 00:30:53.387 "nsid": 1, 00:30:53.387 "bdev_name": "Malloc0", 00:30:53.387 "name": "Malloc0", 00:30:53.387 "nguid": "665FF7916E3F4029A858C3BAA4AF3DE3", 00:30:53.387 "uuid": "665ff791-6e3f-4029-a858-c3baa4af3de3" 00:30:53.387 } 00:30:53.388 ] 00:30:53.388 } 00:30:53.388 ] 00:30:53.388 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:53.388 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:30:53.388 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:30:53.388 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=1856782 00:30:53.388 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:30:53.388 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:30:53.388 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:30:53.388 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:30:53.388 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:30:53.388 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:30:53.388 14:01:52 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:30:53.388 14:01:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:30:53.388 14:01:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:30:53.388 14:01:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:30:53.388 14:01:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:30:53.388 14:01:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:30:53.388 14:01:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:30:53.388 14:01:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:30:53.388 14:01:53 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:30:53.388 14:01:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:53.388 14:01:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:53.388 Malloc1 00:30:53.388 14:01:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:53.388 14:01:53 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:30:53.388 14:01:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:53.388 14:01:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:53.388 14:01:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:53.388 14:01:53 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:30:53.388 14:01:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:53.388 14:01:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:53.388 [ 00:30:53.388 { 00:30:53.388 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:30:53.388 "subtype": "Discovery", 00:30:53.388 "listen_addresses": [], 00:30:53.388 "allow_any_host": true, 00:30:53.388 "hosts": [] 00:30:53.388 }, 00:30:53.388 { 00:30:53.388 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:53.388 "subtype": "NVMe", 00:30:53.388 "listen_addresses": [ 00:30:53.388 { 00:30:53.388 "trtype": "RDMA", 00:30:53.388 "adrfam": "IPv4", 00:30:53.388 "traddr": "192.168.100.8", 00:30:53.388 "trsvcid": "4420" 00:30:53.388 } 00:30:53.388 ], 00:30:53.388 "allow_any_host": true, 00:30:53.388 "hosts": [], 00:30:53.388 "serial_number": "SPDK00000000000001", 00:30:53.388 "model_number": "SPDK bdev Controller", 00:30:53.388 "max_namespaces": 2, 00:30:53.388 "min_cntlid": 1, 00:30:53.388 "max_cntlid": 65519, 00:30:53.388 "namespaces": [ 00:30:53.388 { 00:30:53.388 "nsid": 1, 00:30:53.388 "bdev_name": "Malloc0", 00:30:53.388 "name": "Malloc0", 00:30:53.388 "nguid": "665FF7916E3F4029A858C3BAA4AF3DE3", 00:30:53.388 "uuid": "665ff791-6e3f-4029-a858-c3baa4af3de3" 00:30:53.388 }, 00:30:53.388 { 00:30:53.388 "nsid": 2, 00:30:53.388 "bdev_name": "Malloc1", 00:30:53.388 "name": "Malloc1", 00:30:53.388 "nguid": "CC62CEA888E74FB99B5DADEF9079000A", 00:30:53.388 "uuid": "cc62cea8-88e7-4fb9-9b5d-adef9079000a" 00:30:53.388 } 00:30:53.388 ] 00:30:53.388 } 00:30:53.388 ] 00:30:53.388 14:01:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:53.388 14:01:53 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 1856782 00:30:53.388 Asynchronous Event Request test 00:30:53.388 Attaching to 192.168.100.8 00:30:53.388 Attached to 192.168.100.8 00:30:53.388 Registering asynchronous event callbacks... 00:30:53.388 Starting namespace attribute notice tests for all controllers... 00:30:53.388 192.168.100.8: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:30:53.388 aer_cb - Changed Namespace 00:30:53.388 Cleaning up... 
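The loop traced at autotest_common.sh@1269-1280 above gates the test on the touch file that the aer binary creates once its AER callbacks are registered; adding Malloc1 as nsid 2 afterwards is what fires the namespace-attribute-changed notice (log page 4) that the test output above reports. A hedged reconstruction of that wait loop, with names assumed from the trace:

    # Sketch of waitforfile as traced above (autotest_common.sh@1269-1280).
    waitforfile() {
        local i=0
        while [ ! -e "$1" ] && [ "$i" -lt 200 ]; do
            i=$((i + 1))
            sleep 0.1
        done
        [ -e "$1" ]   # succeed only if the file appeared within ~20 s
    }
    waitforfile /tmp/aer_touch_file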
00:30:53.388 14:01:53 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:30:53.388 14:01:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:53.388 14:01:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:53.648 14:01:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:53.648 14:01:53 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:30:53.648 14:01:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:53.648 14:01:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:53.648 14:01:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:53.648 14:01:53 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:53.648 14:01:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:53.648 14:01:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:53.648 14:01:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:53.648 14:01:53 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:30:53.648 14:01:53 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:30:53.648 14:01:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:53.648 14:01:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:30:53.648 14:01:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:30:53.648 14:01:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:30:53.648 14:01:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:30:53.648 14:01:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:53.648 14:01:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:30:53.648 rmmod nvme_rdma 00:30:53.648 rmmod nvme_fabrics 00:30:53.648 14:01:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:53.648 14:01:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:30:53.648 14:01:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:30:53.648 14:01:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 1856756 ']' 00:30:53.648 14:01:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 1856756 00:30:53.648 14:01:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 1856756 ']' 00:30:53.648 14:01:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 1856756 00:30:53.648 14:01:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:30:53.648 14:01:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:53.648 14:01:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1856756 00:30:53.648 14:01:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:53.648 14:01:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:53.648 14:01:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1856756' 00:30:53.648 killing process 
with pid 1856756 00:30:53.648 14:01:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 1856756 00:30:53.648 14:01:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 1856756 00:30:53.907 14:01:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:53.907 14:01:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:30:53.907 00:30:53.907 real 0m7.290s 00:30:53.907 user 0m5.669s 00:30:53.907 sys 0m4.965s 00:30:53.907 14:01:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:53.907 14:01:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:53.907 ************************************ 00:30:53.907 END TEST nvmf_aer 00:30:53.907 ************************************ 00:30:53.907 14:01:53 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=rdma 00:30:53.907 14:01:53 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:53.907 14:01:53 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:53.907 14:01:53 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:53.907 ************************************ 00:30:53.907 START TEST nvmf_async_init 00:30:53.907 ************************************ 00:30:53.907 14:01:53 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=rdma 00:30:53.907 * Looking for test storage... 00:30:54.166 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:30:54.166 14:01:53 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:54.166 14:01:53 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lcov --version 00:30:54.166 14:01:53 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:54.166 14:01:53 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:54.166 14:01:53 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:54.166 14:01:53 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:54.166 14:01:53 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:54.166 14:01:53 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:30:54.166 14:01:53 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:30:54.166 14:01:53 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:30:54.166 14:01:53 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:30:54.166 14:01:53 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:30:54.166 14:01:53 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:30:54.166 14:01:53 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:30:54.166 14:01:53 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:54.166 14:01:53 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:30:54.166 14:01:53 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:30:54.166 
14:01:53 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:54.166 14:01:53 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:54.166 14:01:53 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:30:54.166 14:01:53 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:30:54.166 14:01:53 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:54.166 14:01:53 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:30:54.166 14:01:53 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:30:54.166 14:01:53 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:30:54.166 14:01:53 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:30:54.166 14:01:53 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:54.166 14:01:53 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:30:54.166 14:01:53 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:30:54.166 14:01:53 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:54.166 14:01:53 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:54.166 14:01:53 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:30:54.166 14:01:53 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:54.166 14:01:53 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:54.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:54.166 --rc genhtml_branch_coverage=1 00:30:54.166 --rc genhtml_function_coverage=1 00:30:54.166 --rc genhtml_legend=1 00:30:54.166 --rc geninfo_all_blocks=1 00:30:54.166 --rc geninfo_unexecuted_blocks=1 00:30:54.166 00:30:54.166 ' 00:30:54.166 14:01:53 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:54.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:54.166 --rc genhtml_branch_coverage=1 00:30:54.166 --rc genhtml_function_coverage=1 00:30:54.166 --rc genhtml_legend=1 00:30:54.166 --rc geninfo_all_blocks=1 00:30:54.166 --rc geninfo_unexecuted_blocks=1 00:30:54.166 00:30:54.166 ' 00:30:54.166 14:01:53 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:54.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:54.166 --rc genhtml_branch_coverage=1 00:30:54.166 --rc genhtml_function_coverage=1 00:30:54.166 --rc genhtml_legend=1 00:30:54.166 --rc geninfo_all_blocks=1 00:30:54.166 --rc geninfo_unexecuted_blocks=1 00:30:54.166 00:30:54.166 ' 00:30:54.166 14:01:53 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:54.167 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:54.167 --rc genhtml_branch_coverage=1 00:30:54.167 --rc genhtml_function_coverage=1 00:30:54.167 --rc genhtml_legend=1 00:30:54.167 --rc geninfo_all_blocks=1 00:30:54.167 --rc geninfo_unexecuted_blocks=1 00:30:54.167 00:30:54.167 ' 00:30:54.167 14:01:53 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 
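The cmp_versions walk traced above is a component-wise numeric compare of dot-separated version strings (1.15 < 2 because 1 < 2 in the first component), which is why the legacy --rc lcov_* options get exported for this lcov. A hedged sketch of the same logic; the real script also splits on - and :, this simplification splits on dots only:

    # Component-wise "less than" for version strings; a sketch of the
    # scripts/common.sh cmp_versions logic traced above, not the script itself.
    version_lt() {
        local -a v1 v2
        IFS='.' read -ra v1 <<< "$1"
        IFS='.' read -ra v2 <<< "$2"
        local i len=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for ((i = 0; i < len; i++)); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1   # equal is not less-than
    }
    version_lt 1.15 2 && echo "old lcov"   # matches the 1.15 < 2 result above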
00:30:54.167 14:01:53 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:30:54.167 14:01:53 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:54.167 14:01:53 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:54.167 14:01:53 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:54.167 14:01:53 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:54.167 14:01:53 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:54.167 14:01:53 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:54.167 14:01:53 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:54.167 14:01:53 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:54.167 14:01:53 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:54.167 14:01:53 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:54.167 14:01:53 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:30:54.167 14:01:53 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:30:54.167 14:01:53 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:54.167 14:01:53 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:54.167 14:01:53 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:54.167 14:01:53 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:54.167 14:01:53 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:30:54.167 14:01:53 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:30:54.167 14:01:53 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:54.167 14:01:53 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:54.167 14:01:53 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:54.167 14:01:53 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:54.167 14:01:53 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:54.167 14:01:53 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:54.167 14:01:53 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:30:54.167 14:01:53 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:54.167 14:01:53 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:30:54.167 14:01:53 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:54.167 14:01:53 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:54.167 14:01:53 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:54.167 14:01:53 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:54.167 14:01:53 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:54.167 14:01:53 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:54.167 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:54.167 14:01:53 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:54.167 14:01:53 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:54.167 14:01:53 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:54.167 14:01:53 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:30:54.167 14:01:53 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:30:54.167 14:01:53 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 
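Just below, host/async_init.sh derives the namespace NGUID by stripping the dashes from a freshly generated UUID, and the null_bdev_size/null_block_size values set above (1024 and 512) determine the geometry the bdev reports later. A short sketch; the size comment is arithmetic, not script output:

    # As traced at host/async_init.sh@20 just below: UUID minus dashes becomes the NGUID.
    nguid=$(uuidgen | tr -d -)     # e.g. 38b6e65e85fb472c9094895791c496c9 on this run
    # bdev_null_create null0 1024 512 (traced later) asks for 1024 MiB in 512 B blocks,
    # i.e. 1024 * 1024 * 1024 / 512 = 2097152 blocks, matching num_blocks reported below.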
00:30:54.167 14:01:53 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:30:54.167 14:01:53 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:30:54.167 14:01:53 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:30:54.167 14:01:53 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=38b6e65e85fb472c9094895791c496c9 00:30:54.167 14:01:53 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:30:54.167 14:01:53 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:30:54.167 14:01:53 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:54.167 14:01:53 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:54.167 14:01:53 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:54.167 14:01:53 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:54.167 14:01:53 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:54.167 14:01:53 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:54.167 14:01:53 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:54.167 14:01:53 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:54.167 14:01:53 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:54.167 14:01:53 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:30:54.167 14:01:53 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:00.744 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:00.744 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:31:00.744 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:00.744 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:00.744 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:00.744 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:00.744 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:00.744 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:31:00.744 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:00.744 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:31:00.744 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:31:00.744 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:31:00.744 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # local -ga x722 00:31:00.744 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:31:00.744 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:31:00.744 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:00.744 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:00.744 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:00.744 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:00.744 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:00.744 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:00.744 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:00.744 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:00.744 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:00.744 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:00.744 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:00.744 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:00.744 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:00.744 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:31:00.744 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:31:00.744 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:31:00.744 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:31:00.744 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:31:00.744 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:00.744 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:00.744 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:31:00.744 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:31:00.744 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:31:00.745 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:31:00.745 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:31:00.745 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:31:00.745 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:31:00.745 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:31:00.745 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:00.745 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:31:00.745 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:31:00.745 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:31:00.745 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # 
[[ mlx5_core == unbound ]] 00:31:00.745 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:31:00.745 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:31:00.745 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:31:00.745 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:31:00.745 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:00.745 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:31:00.745 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:00.745 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:00.745 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:31:00.745 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:00.745 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:00.745 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:31:00.745 Found net devices under 0000:18:00.0: mlx_0_0 00:31:00.745 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:00.745 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:00.745 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:00.745 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:31:00.745 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:00.745 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:00.745 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:31:00.745 Found net devices under 0000:18:00.1: mlx_0_1 00:31:00.745 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:00.745 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:00.745 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:31:00.745 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:00.745 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:31:00.745 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:31:00.745 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@448 -- # rdma_device_init 00:31:00.745 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:31:00.745 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@62 -- # uname 00:31:00.745 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:31:00.745 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@66 -- # modprobe ib_cm 00:31:00.745 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@67 -- # 
modprobe ib_core 00:31:00.745 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@68 -- # modprobe ib_umad 00:31:00.745 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:31:00.745 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@70 -- # modprobe iw_cm 00:31:00.745 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:31:00.745 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:31:00.745 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@530 -- # allocate_nic_ips 00:31:00.745 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:31:00.745 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@77 -- # get_rdma_if_list 00:31:00.745 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:31:00.745 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:31:00.745 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:31:00.745 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:31:00.745 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:31:00.745 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:31:00.745 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:00.745 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:31:00.745 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@108 -- # echo mlx_0_0 00:31:00.745 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@109 -- # continue 2 00:31:00.745 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:31:00.745 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:00.745 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:31:00.745 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:00.745 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:31:00.745 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@108 -- # echo mlx_0_1 00:31:00.745 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@109 -- # continue 2 00:31:00.745 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:31:00.745 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:31:00.745 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:31:00.745 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:31:00.745 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # awk '{print $4}' 00:31:00.745 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # cut -d/ -f1 00:31:00.745 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:31:00.745 14:01:59 
nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:31:00.745 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:31:00.745 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:31:00.745 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:31:00.745 altname enp24s0f0np0 00:31:00.745 altname ens785f0np0 00:31:00.745 inet 192.168.100.8/24 scope global mlx_0_0 00:31:00.745 valid_lft forever preferred_lft forever 00:31:00.745 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:31:00.745 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:31:00.745 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:31:00.745 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:31:00.745 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # awk '{print $4}' 00:31:00.745 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # cut -d/ -f1 00:31:00.745 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:31:00.745 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:31:00.745 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:31:00.745 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:31:00.745 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:31:00.745 altname enp24s0f1np1 00:31:00.745 altname ens785f1np1 00:31:00.745 inet 192.168.100.9/24 scope global mlx_0_1 00:31:00.745 valid_lft forever preferred_lft forever 00:31:00.745 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:31:00.745 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:00.745 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:31:00.745 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:31:00.745 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:31:00.745 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@90 -- # get_rdma_if_list 00:31:00.745 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:31:00.745 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:31:00.745 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:31:00.745 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:31:00.745 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:31:00.745 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:31:00.745 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:00.745 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:31:00.745 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@108 -- # echo mlx_0_0 00:31:00.745 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@109 -- # continue 2 
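The per-interface address lookup traced above reduces to one pipeline; a hedged reconstruction of get_ip_address as it appears in the trace (nvmf/common.sh@116-117), not the script itself:

    get_ip_address() {
        local interface=$1
        # -o prints one line per address; field 4 is the CIDR form, e.g. 192.168.100.8/24.
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0   # -> 192.168.100.8 on this rig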
00:31:00.745 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:31:00.745 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:00.745 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:31:00.745 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:00.745 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:31:00.745 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@108 -- # echo mlx_0_1 00:31:00.745 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@109 -- # continue 2 00:31:00.746 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:31:00.746 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:31:00.746 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:31:00.746 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:31:00.746 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # awk '{print $4}' 00:31:00.746 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # cut -d/ -f1 00:31:00.746 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:31:00.746 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:31:00.746 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:31:00.746 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:31:00.746 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # awk '{print $4}' 00:31:00.746 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # cut -d/ -f1 00:31:00.746 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:31:00.746 192.168.100.9' 00:31:00.746 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:31:00.746 192.168.100.9' 00:31:00.746 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@485 -- # head -n 1 00:31:00.746 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:31:00.746 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:31:00.746 192.168.100.9' 00:31:00.746 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@486 -- # tail -n +2 00:31:00.746 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@486 -- # head -n 1 00:31:00.746 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:31:00.746 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:31:00.746 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:31:00.746 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:31:00.746 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:31:00.746 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 
-- # modprobe nvme-rdma 00:31:00.746 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:31:00.746 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:00.746 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:00.746 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:00.746 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=1860289 00:31:00.746 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 1860289 00:31:00.746 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:31:00.746 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 1860289 ']' 00:31:00.746 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:00.746 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:00.746 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:00.746 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:00.746 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:00.746 14:01:59 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:00.746 [2024-12-05 14:01:59.983236] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 00:31:00.746 [2024-12-05 14:01:59.983290] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:00.746 [2024-12-05 14:02:00.060899] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:00.746 [2024-12-05 14:02:00.083557] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:00.746 [2024-12-05 14:02:00.083593] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:00.746 [2024-12-05 14:02:00.083600] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:00.746 [2024-12-05 14:02:00.083605] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:00.746 [2024-12-05 14:02:00.083610] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
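The trace that follows brings up the single-core target (-m 0x1, one reactor) and then, at host/async_init.sh@26-37, stands the subsystem up and attaches a bdev_nvme controller back to it over the same RDMA address, so the namespace surfaces as nvme0n1 in the same app. Stripped to the RPC calls it is roughly the following, a hedged reconstruction with rpc.py standing in for the rpc_cmd wrapper:

    rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024
    rpc.py bdev_null_create null0 1024 512        # 1024 MiB null bdev, 512 B blocks
    rpc.py bdev_wait_for_examine
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 38b6e65e85fb472c9094895791c496c9
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420
    # Loopback attach over the listener just created; the controller's namespace
    # then appears as bdev nvme0n1, which bdev_get_bdevs dumps below.
    rpc.py bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode0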
00:31:00.746 [2024-12-05 14:02:00.084079] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:00.746 14:02:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:00.746 14:02:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:31:00.746 14:02:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:00.746 14:02:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:00.746 14:02:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:00.746 14:02:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:00.746 14:02:00 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:31:00.746 14:02:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:00.746 14:02:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:00.746 [2024-12-05 14:02:00.234728] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1594d00/0x15991f0) succeed. 00:31:00.746 [2024-12-05 14:02:00.243723] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x15961b0/0x15da890) succeed. 00:31:00.746 14:02:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:00.746 14:02:00 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:31:00.746 14:02:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:00.746 14:02:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:00.746 null0 00:31:00.746 14:02:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:00.746 14:02:00 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:31:00.746 14:02:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:00.746 14:02:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:00.746 14:02:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:00.746 14:02:00 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:31:00.746 14:02:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:00.746 14:02:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:00.746 14:02:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:00.746 14:02:00 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 38b6e65e85fb472c9094895791c496c9 00:31:00.746 14:02:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:00.746 14:02:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:00.746 14:02:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:00.746 14:02:00 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:31:00.746 14:02:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:00.746 14:02:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:00.746 [2024-12-05 14:02:00.312689] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:31:00.746 14:02:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:00.746 14:02:00 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:31:00.746 14:02:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:00.746 14:02:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:00.746 nvme0n1 00:31:00.746 14:02:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:00.746 14:02:00 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:31:00.746 14:02:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:00.746 14:02:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:00.746 [ 00:31:00.746 { 00:31:00.746 "name": "nvme0n1", 00:31:00.746 "aliases": [ 00:31:00.746 "38b6e65e-85fb-472c-9094-895791c496c9" 00:31:00.746 ], 00:31:00.746 "product_name": "NVMe disk", 00:31:00.746 "block_size": 512, 00:31:00.746 "num_blocks": 2097152, 00:31:00.746 "uuid": "38b6e65e-85fb-472c-9094-895791c496c9", 00:31:00.746 "numa_id": 0, 00:31:00.746 "assigned_rate_limits": { 00:31:00.746 "rw_ios_per_sec": 0, 00:31:00.746 "rw_mbytes_per_sec": 0, 00:31:00.746 "r_mbytes_per_sec": 0, 00:31:00.746 "w_mbytes_per_sec": 0 00:31:00.746 }, 00:31:00.746 "claimed": false, 00:31:00.746 "zoned": false, 00:31:00.746 "supported_io_types": { 00:31:00.746 "read": true, 00:31:00.746 "write": true, 00:31:00.746 "unmap": false, 00:31:00.746 "flush": true, 00:31:00.746 "reset": true, 00:31:00.746 "nvme_admin": true, 00:31:00.746 "nvme_io": true, 00:31:00.746 "nvme_io_md": false, 00:31:00.746 "write_zeroes": true, 00:31:00.746 "zcopy": false, 00:31:00.746 "get_zone_info": false, 00:31:00.746 "zone_management": false, 00:31:00.746 "zone_append": false, 00:31:00.746 "compare": true, 00:31:00.746 "compare_and_write": true, 00:31:00.746 "abort": true, 00:31:00.746 "seek_hole": false, 00:31:00.746 "seek_data": false, 00:31:00.746 "copy": true, 00:31:00.746 "nvme_iov_md": false 00:31:00.746 }, 00:31:00.746 "memory_domains": [ 00:31:00.746 { 00:31:00.746 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:31:00.746 "dma_device_type": 0 00:31:00.746 } 00:31:00.746 ], 00:31:00.746 "driver_specific": { 00:31:00.746 "nvme": [ 00:31:00.746 { 00:31:00.746 "trid": { 00:31:00.747 "trtype": "RDMA", 00:31:00.747 "adrfam": "IPv4", 00:31:00.747 "traddr": "192.168.100.8", 00:31:00.747 "trsvcid": "4420", 00:31:00.747 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:31:00.747 }, 00:31:00.747 "ctrlr_data": { 00:31:00.747 "cntlid": 1, 00:31:00.747 "vendor_id": "0x8086", 00:31:00.747 "model_number": "SPDK bdev Controller", 00:31:00.747 "serial_number": "00000000000000000000", 00:31:00.747 "firmware_revision": "25.01", 00:31:00.747 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:00.747 "oacs": { 00:31:00.747 "security": 0, 
00:31:00.747 "format": 0, 00:31:00.747 "firmware": 0, 00:31:00.747 "ns_manage": 0 00:31:00.747 }, 00:31:00.747 "multi_ctrlr": true, 00:31:00.747 "ana_reporting": false 00:31:00.747 }, 00:31:00.747 "vs": { 00:31:00.747 "nvme_version": "1.3" 00:31:00.747 }, 00:31:00.747 "ns_data": { 00:31:00.747 "id": 1, 00:31:00.747 "can_share": true 00:31:00.747 } 00:31:00.747 } 00:31:00.747 ], 00:31:00.747 "mp_policy": "active_passive" 00:31:00.747 } 00:31:00.747 } 00:31:00.747 ] 00:31:00.747 14:02:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:00.747 14:02:00 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:31:00.747 14:02:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:00.747 14:02:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:00.747 [2024-12-05 14:02:00.412655] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:31:00.747 [2024-12-05 14:02:00.437496] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:31:00.747 [2024-12-05 14:02:00.457807] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:31:00.747 14:02:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:00.747 14:02:00 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:31:00.747 14:02:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:00.747 14:02:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:00.747 [ 00:31:00.747 { 00:31:00.747 "name": "nvme0n1", 00:31:00.747 "aliases": [ 00:31:00.747 "38b6e65e-85fb-472c-9094-895791c496c9" 00:31:00.747 ], 00:31:00.747 "product_name": "NVMe disk", 00:31:00.747 "block_size": 512, 00:31:00.747 "num_blocks": 2097152, 00:31:00.747 "uuid": "38b6e65e-85fb-472c-9094-895791c496c9", 00:31:00.747 "numa_id": 0, 00:31:00.747 "assigned_rate_limits": { 00:31:00.747 "rw_ios_per_sec": 0, 00:31:00.747 "rw_mbytes_per_sec": 0, 00:31:00.747 "r_mbytes_per_sec": 0, 00:31:00.747 "w_mbytes_per_sec": 0 00:31:00.747 }, 00:31:00.747 "claimed": false, 00:31:00.747 "zoned": false, 00:31:00.747 "supported_io_types": { 00:31:00.747 "read": true, 00:31:00.747 "write": true, 00:31:00.747 "unmap": false, 00:31:00.747 "flush": true, 00:31:00.747 "reset": true, 00:31:00.747 "nvme_admin": true, 00:31:00.747 "nvme_io": true, 00:31:00.747 "nvme_io_md": false, 00:31:00.747 "write_zeroes": true, 00:31:00.747 "zcopy": false, 00:31:00.747 "get_zone_info": false, 00:31:00.747 "zone_management": false, 00:31:00.747 "zone_append": false, 00:31:00.747 "compare": true, 00:31:00.747 "compare_and_write": true, 00:31:00.747 "abort": true, 00:31:00.747 "seek_hole": false, 00:31:00.747 "seek_data": false, 00:31:00.747 "copy": true, 00:31:00.747 "nvme_iov_md": false 00:31:00.747 }, 00:31:00.747 "memory_domains": [ 00:31:00.747 { 00:31:00.747 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:31:00.747 "dma_device_type": 0 00:31:00.747 } 00:31:00.747 ], 00:31:00.747 "driver_specific": { 00:31:00.747 "nvme": [ 00:31:00.747 { 00:31:00.747 "trid": { 00:31:00.747 "trtype": "RDMA", 00:31:00.747 "adrfam": "IPv4", 00:31:00.747 "traddr": "192.168.100.8", 
00:31:00.747 "trsvcid": "4420", 00:31:00.747 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:31:00.747 }, 00:31:00.747 "ctrlr_data": { 00:31:00.747 "cntlid": 2, 00:31:00.747 "vendor_id": "0x8086", 00:31:00.747 "model_number": "SPDK bdev Controller", 00:31:00.747 "serial_number": "00000000000000000000", 00:31:00.747 "firmware_revision": "25.01", 00:31:00.747 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:00.747 "oacs": { 00:31:00.747 "security": 0, 00:31:00.747 "format": 0, 00:31:00.747 "firmware": 0, 00:31:00.747 "ns_manage": 0 00:31:00.747 }, 00:31:00.747 "multi_ctrlr": true, 00:31:00.747 "ana_reporting": false 00:31:00.747 }, 00:31:00.747 "vs": { 00:31:00.747 "nvme_version": "1.3" 00:31:00.747 }, 00:31:00.747 "ns_data": { 00:31:00.747 "id": 1, 00:31:00.747 "can_share": true 00:31:00.747 } 00:31:00.747 } 00:31:00.747 ], 00:31:00.747 "mp_policy": "active_passive" 00:31:00.747 } 00:31:00.747 } 00:31:00.747 ] 00:31:00.747 14:02:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:00.747 14:02:00 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:00.747 14:02:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:00.747 14:02:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:00.747 14:02:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:00.747 14:02:00 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:31:00.747 14:02:00 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.ZYsWvtqxJx 00:31:00.747 14:02:00 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:31:00.747 14:02:00 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.ZYsWvtqxJx 00:31:00.747 14:02:00 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.ZYsWvtqxJx 00:31:00.747 14:02:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:00.747 14:02:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:00.747 14:02:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:00.747 14:02:00 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:31:00.747 14:02:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:00.747 14:02:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:00.747 14:02:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:00.747 14:02:00 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4421 --secure-channel 00:31:00.747 14:02:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:00.747 14:02:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:00.747 [2024-12-05 14:02:00.540069] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:31:00.747 14:02:00 nvmf_rdma.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:00.747 14:02:00 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:31:00.747 14:02:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:00.747 14:02:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:00.747 14:02:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:00.747 14:02:00 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:31:00.747 14:02:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:00.747 14:02:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:00.747 [2024-12-05 14:02:00.560126] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:31:01.007 nvme0n1 00:31:01.007 14:02:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:01.007 14:02:00 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:31:01.007 14:02:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:01.007 14:02:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:01.007 [ 00:31:01.007 { 00:31:01.007 "name": "nvme0n1", 00:31:01.007 "aliases": [ 00:31:01.007 "38b6e65e-85fb-472c-9094-895791c496c9" 00:31:01.007 ], 00:31:01.007 "product_name": "NVMe disk", 00:31:01.007 "block_size": 512, 00:31:01.007 "num_blocks": 2097152, 00:31:01.007 "uuid": "38b6e65e-85fb-472c-9094-895791c496c9", 00:31:01.007 "numa_id": 0, 00:31:01.007 "assigned_rate_limits": { 00:31:01.007 "rw_ios_per_sec": 0, 00:31:01.007 "rw_mbytes_per_sec": 0, 00:31:01.007 "r_mbytes_per_sec": 0, 00:31:01.007 "w_mbytes_per_sec": 0 00:31:01.007 }, 00:31:01.007 "claimed": false, 00:31:01.007 "zoned": false, 00:31:01.007 "supported_io_types": { 00:31:01.007 "read": true, 00:31:01.007 "write": true, 00:31:01.007 "unmap": false, 00:31:01.007 "flush": true, 00:31:01.007 "reset": true, 00:31:01.007 "nvme_admin": true, 00:31:01.007 "nvme_io": true, 00:31:01.007 "nvme_io_md": false, 00:31:01.007 "write_zeroes": true, 00:31:01.007 "zcopy": false, 00:31:01.007 "get_zone_info": false, 00:31:01.007 "zone_management": false, 00:31:01.007 "zone_append": false, 00:31:01.007 "compare": true, 00:31:01.007 "compare_and_write": true, 00:31:01.007 "abort": true, 00:31:01.007 "seek_hole": false, 00:31:01.007 "seek_data": false, 00:31:01.007 "copy": true, 00:31:01.007 "nvme_iov_md": false 00:31:01.007 }, 00:31:01.007 "memory_domains": [ 00:31:01.007 { 00:31:01.007 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:31:01.007 "dma_device_type": 0 00:31:01.007 } 00:31:01.007 ], 00:31:01.007 "driver_specific": { 00:31:01.007 "nvme": [ 00:31:01.007 { 00:31:01.007 "trid": { 00:31:01.007 "trtype": "RDMA", 00:31:01.007 "adrfam": "IPv4", 00:31:01.007 "traddr": "192.168.100.8", 00:31:01.007 "trsvcid": "4421", 00:31:01.007 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:31:01.007 }, 00:31:01.007 "ctrlr_data": { 00:31:01.007 "cntlid": 3, 00:31:01.007 "vendor_id": "0x8086", 00:31:01.007 "model_number": "SPDK bdev Controller", 00:31:01.007 
"serial_number": "00000000000000000000", 00:31:01.007 "firmware_revision": "25.01", 00:31:01.007 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:01.007 "oacs": { 00:31:01.007 "security": 0, 00:31:01.007 "format": 0, 00:31:01.007 "firmware": 0, 00:31:01.007 "ns_manage": 0 00:31:01.007 }, 00:31:01.007 "multi_ctrlr": true, 00:31:01.007 "ana_reporting": false 00:31:01.007 }, 00:31:01.007 "vs": { 00:31:01.007 "nvme_version": "1.3" 00:31:01.007 }, 00:31:01.007 "ns_data": { 00:31:01.007 "id": 1, 00:31:01.007 "can_share": true 00:31:01.007 } 00:31:01.007 } 00:31:01.007 ], 00:31:01.007 "mp_policy": "active_passive" 00:31:01.007 } 00:31:01.007 } 00:31:01.007 ] 00:31:01.007 14:02:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:01.007 14:02:00 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:01.007 14:02:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:01.007 14:02:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:01.007 14:02:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:01.007 14:02:00 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.ZYsWvtqxJx 00:31:01.007 14:02:00 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 00:31:01.007 14:02:00 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:31:01.007 14:02:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:01.007 14:02:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:31:01.007 14:02:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:31:01.007 14:02:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:31:01.007 14:02:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:31:01.007 14:02:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:01.007 14:02:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:31:01.007 rmmod nvme_rdma 00:31:01.007 rmmod nvme_fabrics 00:31:01.007 14:02:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:01.007 14:02:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:31:01.007 14:02:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:31:01.007 14:02:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 1860289 ']' 00:31:01.007 14:02:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 1860289 00:31:01.007 14:02:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 1860289 ']' 00:31:01.007 14:02:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 1860289 00:31:01.007 14:02:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:31:01.008 14:02:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:01.008 14:02:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1860289 00:31:01.008 14:02:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:01.008 14:02:00 
nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:01.008 14:02:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1860289' 00:31:01.008 killing process with pid 1860289 00:31:01.008 14:02:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 1860289 00:31:01.008 14:02:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 1860289 00:31:01.267 14:02:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:01.267 14:02:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:31:01.267 00:31:01.267 real 0m7.282s 00:31:01.267 user 0m2.882s 00:31:01.267 sys 0m4.906s 00:31:01.267 14:02:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:01.267 14:02:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:01.267 ************************************ 00:31:01.267 END TEST nvmf_async_init 00:31:01.267 ************************************ 00:31:01.267 14:02:00 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=rdma 00:31:01.267 14:02:00 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:31:01.267 14:02:00 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:01.267 14:02:00 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:01.267 ************************************ 00:31:01.267 START TEST dma 00:31:01.267 ************************************ 00:31:01.267 14:02:01 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=rdma 00:31:01.267 * Looking for test storage... 
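Before the dma suite starts, it is worth condensing what nvmf_async_init just exercised. Every command below is lifted from the trace above (rpc.py path shortened); note how the -g GUID passed to nvmf_subsystem_add_ns comes back as the bdev's "uuid" and alias in each bdev_get_bdevs dump, and how "cntlid" climbs 1 -> 2 -> 3 across the reset and the TLS reattach:

  rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024
  rpc.py bdev_null_create null0 1024 512
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 38b6e65e85fb472c9094895791c496c9
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420
  rpc.py bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0
  rpc.py bdev_nvme_reset_controller nvme0      # cntlid 1 -> 2 in the next bdev dump
  rpc.py bdev_nvme_detach_controller nvme0
  # TLS leg: register the throwaway PSK, restrict the subsystem to one host,
  # then listen and reattach on port 4421 over a secure channel
  rpc.py keyring_file_add_key key0 /tmp/tmp.ZYsWvtqxJx
  rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4421 --secure-channel
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0
  rpc.py bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0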
00:31:01.267 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:31:01.267 14:02:01 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:01.267 14:02:01 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lcov --version 00:31:01.267 14:02:01 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:01.527 14:02:01 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:01.527 14:02:01 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:01.527 14:02:01 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:01.527 14:02:01 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:01.527 14:02:01 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:31:01.527 14:02:01 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:31:01.527 14:02:01 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:31:01.527 14:02:01 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:31:01.527 14:02:01 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:31:01.527 14:02:01 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:31:01.527 14:02:01 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:31:01.527 14:02:01 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:01.527 14:02:01 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:31:01.527 14:02:01 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:31:01.527 14:02:01 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:01.527 14:02:01 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:01.527 14:02:01 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:31:01.527 14:02:01 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:31:01.527 14:02:01 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:01.527 14:02:01 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:31:01.527 14:02:01 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:31:01.527 14:02:01 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:31:01.527 14:02:01 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:31:01.527 14:02:01 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:01.527 14:02:01 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:31:01.527 14:02:01 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:31:01.527 14:02:01 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:01.527 14:02:01 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:01.527 14:02:01 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:31:01.527 14:02:01 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:01.527 14:02:01 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:01.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:01.527 --rc genhtml_branch_coverage=1 00:31:01.527 --rc genhtml_function_coverage=1 00:31:01.527 --rc genhtml_legend=1 00:31:01.527 --rc geninfo_all_blocks=1 00:31:01.527 --rc geninfo_unexecuted_blocks=1 00:31:01.527 00:31:01.527 ' 00:31:01.527 14:02:01 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:01.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:01.527 --rc genhtml_branch_coverage=1 00:31:01.527 --rc genhtml_function_coverage=1 00:31:01.527 --rc genhtml_legend=1 00:31:01.527 --rc geninfo_all_blocks=1 00:31:01.527 --rc geninfo_unexecuted_blocks=1 00:31:01.527 00:31:01.527 ' 00:31:01.527 14:02:01 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:01.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:01.527 --rc genhtml_branch_coverage=1 00:31:01.527 --rc genhtml_function_coverage=1 00:31:01.527 --rc genhtml_legend=1 00:31:01.527 --rc geninfo_all_blocks=1 00:31:01.527 --rc geninfo_unexecuted_blocks=1 00:31:01.527 00:31:01.527 ' 00:31:01.527 14:02:01 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:01.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:01.527 --rc genhtml_branch_coverage=1 00:31:01.527 --rc genhtml_function_coverage=1 00:31:01.527 --rc genhtml_legend=1 00:31:01.527 --rc geninfo_all_blocks=1 00:31:01.527 --rc geninfo_unexecuted_blocks=1 00:31:01.527 00:31:01.527 ' 00:31:01.527 14:02:01 nvmf_rdma.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:31:01.527 14:02:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:31:01.527 14:02:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:01.527 14:02:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:01.527 14:02:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:01.527 14:02:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:31:01.527 14:02:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:01.527 14:02:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:01.527 14:02:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:01.527 14:02:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:01.527 14:02:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:01.527 14:02:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:01.527 14:02:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:31:01.527 14:02:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:31:01.527 14:02:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:01.527 14:02:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:01.527 14:02:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:01.527 14:02:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:01.527 14:02:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:31:01.527 14:02:01 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:31:01.527 14:02:01 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:01.527 14:02:01 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:01.527 14:02:01 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:01.527 14:02:01 nvmf_rdma.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:01.528 14:02:01 nvmf_rdma.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:01.528 14:02:01 nvmf_rdma.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:01.528 14:02:01 nvmf_rdma.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:31:01.528 14:02:01 nvmf_rdma.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:01.528 14:02:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:31:01.528 14:02:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:01.528 14:02:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:01.528 14:02:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:01.528 14:02:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:01.528 14:02:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:01.528 14:02:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:01.528 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:01.528 14:02:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:01.528 14:02:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:01.528 14:02:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:01.528 14:02:01 nvmf_rdma.nvmf_host.dma -- host/dma.sh@12 -- # '[' rdma '!=' rdma ']' 00:31:01.528 14:02:01 nvmf_rdma.nvmf_host.dma -- host/dma.sh@16 -- # MALLOC_BDEV_SIZE=256 00:31:01.528 14:02:01 nvmf_rdma.nvmf_host.dma -- host/dma.sh@17 -- # MALLOC_BLOCK_SIZE=512 00:31:01.528 14:02:01 nvmf_rdma.nvmf_host.dma -- host/dma.sh@18 -- # subsystem=0 00:31:01.528 14:02:01 nvmf_rdma.nvmf_host.dma -- host/dma.sh@93 -- # nvmftestinit 00:31:01.528 14:02:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:31:01.528 14:02:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:01.528 14:02:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:01.528 14:02:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:01.528 14:02:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:01.528 14:02:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:01.528 14:02:01 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 
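nvmftestinit now probes the phy rig: it enumerates the Mellanox PCI functions, loads the RDMA kernel stack, and makes sure each port carries one of the test's fixed 192.168.100.0/24 addresses. A sketch of that bring-up (module list and interface names are the ones this host reports below; the ip commands only fire when an address is missing, and here both already exist so the checks fall through):

  for m in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
      modprobe "$m"
  done
  # allocate_nic_ips: assign NVMF_IP_PREFIX addresses starting at NVMF_IP_LEAST_ADDR
  ip addr add 192.168.100.8/24 dev mlx_0_0   # NVMF_FIRST_TARGET_IP
  ip addr add 192.168.100.9/24 dev mlx_0_1   # NVMF_SECOND_TARGET_IP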
00:31:01.528 14:02:01 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:01.528 14:02:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:01.528 14:02:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:01.528 14:02:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@309 -- # xtrace_disable 00:31:01.528 14:02:01 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:31:08.097 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:08.097 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@315 -- # pci_devs=() 00:31:08.097 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:08.097 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:08.097 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:08.097 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:08.097 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:08.097 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@319 -- # net_devs=() 00:31:08.097 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:08.097 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@320 -- # e810=() 00:31:08.097 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@320 -- # local -ga e810 00:31:08.097 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@321 -- # x722=() 00:31:08.097 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@321 -- # local -ga x722 00:31:08.097 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@322 -- # mlx=() 00:31:08.097 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@322 -- # local -ga mlx 00:31:08.097 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:08.097 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:08.097 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:08.097 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:08.097 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:08.097 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:08.097 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:08.097 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:08.097 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:08.097 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:08.097 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:08.097 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:08.097 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:08.097 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:31:08.097 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:31:08.097 14:02:07 nvmf_rdma.nvmf_host.dma -- 
nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:31:08.097 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:31:08.097 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:31:08.097 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:08.097 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:08.097 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:31:08.097 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:31:08.097 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:31:08.097 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:31:08.097 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:31:08.097 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:31:08.097 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:31:08.097 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:31:08.097 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:08.097 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:31:08.097 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:31:08.097 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:31:08.097 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:31:08.097 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:31:08.097 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:31:08.097 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:31:08.097 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:31:08.097 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:08.097 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:31:08.097 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:08.097 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:08.097 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:31:08.097 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:08.097 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:08.097 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:31:08.097 Found net devices under 0000:18:00.0: mlx_0_0 00:31:08.097 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:08.097 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:08.097 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:08.097 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:31:08.097 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:08.097 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:08.097 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:31:08.097 Found net devices under 0000:18:00.1: mlx_0_1 00:31:08.097 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:08.097 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:08.097 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@442 -- # is_hw=yes 00:31:08.097 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:08.097 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:31:08.097 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:31:08.097 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@448 -- # rdma_device_init 00:31:08.097 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:31:08.097 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@62 -- # uname 00:31:08.097 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:31:08.097 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@66 -- # modprobe ib_cm 00:31:08.097 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@67 -- # modprobe ib_core 00:31:08.097 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@68 -- # modprobe ib_umad 00:31:08.097 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:31:08.097 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@70 -- # modprobe iw_cm 00:31:08.097 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:31:08.097 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:31:08.097 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@530 -- # allocate_nic_ips 00:31:08.097 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:31:08.097 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@77 -- # get_rdma_if_list 00:31:08.097 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:31:08.097 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:31:08.097 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:31:08.097 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:31:08.097 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:31:08.097 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:31:08.097 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:08.097 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:31:08.097 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@108 -- # echo mlx_0_0 00:31:08.097 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@109 -- # continue 2 00:31:08.097 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:31:08.097 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:08.097 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:31:08.097 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:08.097 14:02:07 nvmf_rdma.nvmf_host.dma -- 
nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:31:08.097 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@108 -- # echo mlx_0_1 00:31:08.097 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@109 -- # continue 2 00:31:08.097 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:31:08.097 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:31:08.097 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:31:08.097 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:31:08.097 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:31:08.098 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:31:08.098 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:31:08.098 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:31:08.098 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:31:08.098 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:31:08.098 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:31:08.098 altname enp24s0f0np0 00:31:08.098 altname ens785f0np0 00:31:08.098 inet 192.168.100.8/24 scope global mlx_0_0 00:31:08.098 valid_lft forever preferred_lft forever 00:31:08.098 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:31:08.098 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:31:08.098 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:31:08.098 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:31:08.098 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:31:08.098 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:31:08.098 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:31:08.098 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:31:08.098 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:31:08.098 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:31:08.098 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:31:08.098 altname enp24s0f1np1 00:31:08.098 altname ens785f1np1 00:31:08.098 inet 192.168.100.9/24 scope global mlx_0_1 00:31:08.098 valid_lft forever preferred_lft forever 00:31:08.098 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@450 -- # return 0 00:31:08.098 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:08.098 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:31:08.098 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:31:08.098 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:31:08.098 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@90 -- # get_rdma_if_list 00:31:08.098 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:31:08.098 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:31:08.098 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:31:08.098 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh 
rxe-net 00:31:08.098 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:31:08.098 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:31:08.098 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:08.098 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:31:08.098 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@108 -- # echo mlx_0_0 00:31:08.098 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@109 -- # continue 2 00:31:08.098 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:31:08.098 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:08.098 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:31:08.098 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:08.098 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:31:08.098 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@108 -- # echo mlx_0_1 00:31:08.098 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@109 -- # continue 2 00:31:08.098 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:31:08.098 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:31:08.098 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:31:08.098 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:31:08.098 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:31:08.098 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:31:08.098 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:31:08.098 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:31:08.098 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:31:08.098 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:31:08.098 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:31:08.098 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:31:08.098 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:31:08.098 192.168.100.9' 00:31:08.098 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:31:08.098 192.168.100.9' 00:31:08.098 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@485 -- # head -n 1 00:31:08.098 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:31:08.098 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:31:08.098 192.168.100.9' 00:31:08.098 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@486 -- # tail -n +2 00:31:08.098 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@486 -- # head -n 1 00:31:08.098 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:31:08.098 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:31:08.098 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:31:08.098 14:02:07 nvmf_rdma.nvmf_host.dma -- 
nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:31:08.098 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:31:08.098 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:31:08.098 14:02:07 nvmf_rdma.nvmf_host.dma -- host/dma.sh@94 -- # nvmfappstart -m 0x3 00:31:08.098 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:08.098 14:02:07 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:08.098 14:02:07 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:31:08.098 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@509 -- # nvmfpid=1863630 00:31:08.098 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@510 -- # waitforlisten 1863630 00:31:08.098 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:31:08.098 14:02:07 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@835 -- # '[' -z 1863630 ']' 00:31:08.098 14:02:07 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:08.098 14:02:07 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:08.098 14:02:07 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:08.098 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:08.098 14:02:07 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:08.098 14:02:07 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:31:08.098 [2024-12-05 14:02:07.287850] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 00:31:08.098 [2024-12-05 14:02:07.287894] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:08.098 [2024-12-05 14:02:07.362036] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:08.098 [2024-12-05 14:02:07.382566] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:08.098 [2024-12-05 14:02:07.382602] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:08.098 [2024-12-05 14:02:07.382608] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:08.098 [2024-12-05 14:02:07.382613] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:08.098 [2024-12-05 14:02:07.382618] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
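The dma suite reuses the same bring-up but splits the machine: the target takes core mask 0x3 (reactors on cores 0-1) while each test_dma client below pins to 0xc (cores 2-3), so the two SPDK processes never contend for a reactor. Condensed, the target-side RPCs the trace walks through next are (all verbatim from host/dma.sh):

  rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024
  rpc.py bdev_malloc_create 256 512 -b Malloc0    # 256 MiB malloc bdev, 512 B blocks
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420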
00:31:08.098 [2024-12-05 14:02:07.383651] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:08.098 [2024-12-05 14:02:07.383652] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:08.098 14:02:07 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:08.098 14:02:07 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@868 -- # return 0 00:31:08.098 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:08.098 14:02:07 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:08.098 14:02:07 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:31:08.098 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:08.098 14:02:07 nvmf_rdma.nvmf_host.dma -- host/dma.sh@96 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:31:08.098 14:02:07 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:08.098 14:02:07 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:31:08.098 [2024-12-05 14:02:07.523318] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x83d860/0x841d50) succeed. 00:31:08.098 [2024-12-05 14:02:07.531330] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x83edb0/0x8833f0) succeed. 00:31:08.098 14:02:07 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:08.098 14:02:07 nvmf_rdma.nvmf_host.dma -- host/dma.sh@97 -- # rpc_cmd bdev_malloc_create 256 512 -b Malloc0 00:31:08.098 14:02:07 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:08.098 14:02:07 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:31:08.098 Malloc0 00:31:08.098 14:02:07 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:08.098 14:02:07 nvmf_rdma.nvmf_host.dma -- host/dma.sh@98 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000001 00:31:08.098 14:02:07 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:08.098 14:02:07 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:31:08.098 14:02:07 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:08.098 14:02:07 nvmf_rdma.nvmf_host.dma -- host/dma.sh@99 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0 00:31:08.098 14:02:07 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:08.098 14:02:07 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:31:08.098 14:02:07 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:08.098 14:02:07 nvmf_rdma.nvmf_host.dma -- host/dma.sh@100 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:31:08.098 14:02:07 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:08.098 14:02:07 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:31:08.098 [2024-12-05 14:02:07.674701] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:31:08.098 14:02:07 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:08.098 14:02:07 nvmf_rdma.nvmf_host.dma -- host/dma.sh@104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 
-o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b Nvme0n1 -f -x translate 00:31:08.099 14:02:07 nvmf_rdma.nvmf_host.dma -- host/dma.sh@104 -- # gen_nvmf_target_json 0 00:31:08.099 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@560 -- # config=() 00:31:08.099 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@560 -- # local subsystem config 00:31:08.099 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:08.099 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:08.099 { 00:31:08.099 "params": { 00:31:08.099 "name": "Nvme$subsystem", 00:31:08.099 "trtype": "$TEST_TRANSPORT", 00:31:08.099 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:08.099 "adrfam": "ipv4", 00:31:08.099 "trsvcid": "$NVMF_PORT", 00:31:08.099 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:08.099 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:08.099 "hdgst": ${hdgst:-false}, 00:31:08.099 "ddgst": ${ddgst:-false} 00:31:08.099 }, 00:31:08.099 "method": "bdev_nvme_attach_controller" 00:31:08.099 } 00:31:08.099 EOF 00:31:08.099 )") 00:31:08.099 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@582 -- # cat 00:31:08.099 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@584 -- # jq . 00:31:08.099 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@585 -- # IFS=, 00:31:08.099 14:02:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:08.099 "params": { 00:31:08.099 "name": "Nvme0", 00:31:08.099 "trtype": "rdma", 00:31:08.099 "traddr": "192.168.100.8", 00:31:08.099 "adrfam": "ipv4", 00:31:08.099 "trsvcid": "4420", 00:31:08.099 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:08.099 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:08.099 "hdgst": false, 00:31:08.099 "ddgst": false 00:31:08.099 }, 00:31:08.099 "method": "bdev_nvme_attach_controller" 00:31:08.099 }' 00:31:08.099 [2024-12-05 14:02:07.723942] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 
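The client invocation just launched, restated with its flags decoded (a sketch of the same command; the JSON bdev config normally arrives through the gen_nvmf_target_json process substitution on /dev/fd/62 printed above):

  # -q 16         queue depth per core
  # -o 4096       4 KiB I/Os
  # -w randrw -M 70   mixed random workload, 70% reads
  # -t 5          run for 5 seconds
  # -m 0xc        client reactors on cores 2 and 3
  # -b Nvme0n1    drive I/O against the attached NVMe-oF bdev
  # -f            passed by dma.sh on the translate and memzero runs (meaning not shown in the trace)
  # -x translate  exercise the address-translation DMA path
  test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc \
      --json /dev/fd/62 -b Nvme0n1 -f -x translate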
00:31:08.099 [2024-12-05 14:02:07.723994] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1863838 ] 00:31:08.099 [2024-12-05 14:02:07.797876] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:08.099 [2024-12-05 14:02:07.820487] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:08.099 [2024-12-05 14:02:07.820488] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:13.371 bdev Nvme0n1 reports 1 memory domains 00:31:13.371 bdev Nvme0n1 supports RDMA memory domain 00:31:13.371 Initialization complete, running randrw IO for 5 sec on 2 cores 00:31:13.371 ========================================================================== 00:31:13.371 Latency [us] 00:31:13.371 IOPS MiB/s Average min max 00:31:13.371 Core 2: 22372.58 87.39 714.53 237.11 9661.66 00:31:13.371 Core 3: 22287.40 87.06 717.27 233.19 9595.26 00:31:13.371 ========================================================================== 00:31:13.371 Total : 44659.98 174.45 715.90 233.19 9661.66 00:31:13.371 00:31:13.371 Total operations: 223352, translate 223352 pull_push 0 memzero 0 00:31:13.371 14:02:13 nvmf_rdma.nvmf_host.dma -- host/dma.sh@107 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b Malloc0 -x pull_push 00:31:13.371 14:02:13 nvmf_rdma.nvmf_host.dma -- host/dma.sh@107 -- # gen_malloc_json 00:31:13.371 14:02:13 nvmf_rdma.nvmf_host.dma -- host/dma.sh@21 -- # jq . 00:31:13.371 [2024-12-05 14:02:13.212830] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 
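[Note: the first run translated every I/O (translate 223352, pull_push 0) because Nvme0n1 exposes an RDMA memory domain, so buffers can be handed straight to the transport without copying. The next run points -b at plain Malloc0, which, as its banner below confirms, doesn't support an RDMA memory domain, so the DMA framework falls back to pull/push copies. A sketch of how one could inspect what a bdev advertises, issued against whichever app hosts the bdev; the memory_domains field name is assumed and may vary across SPDK releases:

    ./scripts/rpc.py bdev_get_bdevs -b Nvme0n1 | jq '.[0].memory_domains'
    ./scripts/rpc.py bdev_get_bdevs -b Malloc0 | jq '.[0].memory_domains'
]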
00:31:13.371 [2024-12-05 14:02:13.212876] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1864661 ] 00:31:13.630 [2024-12-05 14:02:13.284533] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:13.630 [2024-12-05 14:02:13.306141] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:13.630 [2024-12-05 14:02:13.306141] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:18.912 bdev Malloc0 reports 2 memory domains 00:31:18.912 bdev Malloc0 doesn't support RDMA memory domain 00:31:18.912 Initialization complete, running randrw IO for 5 sec on 2 cores 00:31:18.912 ========================================================================== 00:31:18.912 Latency [us] 00:31:18.912 IOPS MiB/s Average min max 00:31:18.912 Core 2: 14706.27 57.45 1087.30 405.73 1397.53 00:31:18.912 Core 3: 14913.20 58.25 1072.22 424.00 1713.41 00:31:18.912 ========================================================================== 00:31:18.912 Total : 29619.47 115.70 1079.71 405.73 1713.41 00:31:18.912 00:31:18.912 Total operations: 148149, translate 0 pull_push 592596 memzero 0 00:31:18.913 14:02:18 nvmf_rdma.nvmf_host.dma -- host/dma.sh@110 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randread -M 70 -t 5 -m 0xc --json /dev/fd/62 -b lvs0/lvol0 -f -x memzero 00:31:18.913 14:02:18 nvmf_rdma.nvmf_host.dma -- host/dma.sh@110 -- # gen_lvol_nvme_json 0 00:31:18.913 14:02:18 nvmf_rdma.nvmf_host.dma -- host/dma.sh@48 -- # local subsystem=0 00:31:18.913 14:02:18 nvmf_rdma.nvmf_host.dma -- host/dma.sh@50 -- # jq . 00:31:18.913 Ignoring -M option 00:31:18.913 [2024-12-05 14:02:18.589724] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 
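[Note: the pull_push totals above work out to exactly four pull/push operations per I/O (592596 / 148149 = 4), though the counter's exact semantics aren't spelled out in this trace. The memzero run now starting targets lvs0/lvol0 and switches to -w randread, so the 70/30 mix flag is meaningless, hence the "Ignoring -M option" notice. The lvol's creation isn't visible in this slice of the log; a hypothetical sketch of how such an lvol could be provisioned over the attached controller, with the 64 MiB size purely a placeholder:

    ./scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs0
    ./scripts/rpc.py bdev_lvol_create -l lvs0 lvol0 64
]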
00:31:18.913 [2024-12-05 14:02:18.589772] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1865683 ] 00:31:18.913 [2024-12-05 14:02:18.642962] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:18.913 [2024-12-05 14:02:18.664741] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:18.913 [2024-12-05 14:02:18.664745] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:25.481 bdev 160c3ef2-2a22-4051-8d1b-c30501ab004f reports 1 memory domains 00:31:25.481 bdev 160c3ef2-2a22-4051-8d1b-c30501ab004f supports RDMA memory domain 00:31:25.481 Initialization complete, running randread IO for 5 sec on 2 cores 00:31:25.481 ========================================================================== 00:31:25.481 Latency [us] 00:31:25.481 IOPS MiB/s Average min max 00:31:25.481 Core 2: 79892.65 312.08 199.56 76.30 3423.07 00:31:25.481 Core 3: 76287.69 298.00 208.95 69.44 3352.01 00:31:25.481 ========================================================================== 00:31:25.481 Total : 156180.35 610.08 204.15 69.44 3423.07 00:31:25.481 00:31:25.481 Total operations: 780997, translate 0 pull_push 0 memzero 780997 00:31:25.481 14:02:24 nvmf_rdma.nvmf_host.dma -- host/dma.sh@113 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 16 -o 4096 -w write -t 1 -r 'trtype:rdma adrfam:IPV4 traddr:192.168.100.8 trsvcid:4420' 00:31:25.481 [2024-12-05 14:02:24.167575] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:31:26.856 Initializing NVMe Controllers 00:31:26.856 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:31:26.856 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:31:26.856 Initialization complete. Launching workers. 00:31:26.856 ======================================================== 00:31:26.856 Latency(us) 00:31:26.856 Device Information : IOPS MiB/s Average min max 00:31:26.856 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 0: 2024.59 7.91 7964.84 6176.62 8788.50 00:31:26.856 ======================================================== 00:31:26.856 Total : 2024.59 7.91 7964.84 6176.62 8788.50 00:31:26.856 00:31:26.856 14:02:26 nvmf_rdma.nvmf_host.dma -- host/dma.sh@116 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b lvs0/lvol0 -f -x translate 00:31:26.856 14:02:26 nvmf_rdma.nvmf_host.dma -- host/dma.sh@116 -- # gen_lvol_nvme_json 0 00:31:26.856 14:02:26 nvmf_rdma.nvmf_host.dma -- host/dma.sh@48 -- # local subsystem=0 00:31:26.856 14:02:26 nvmf_rdma.nvmf_host.dma -- host/dma.sh@50 -- # jq . 00:31:26.856 [2024-12-05 14:02:26.510802] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 
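[Note: the spdk_nvme_perf run above connected through the discovery subsystem, which is what triggered the "This behavior is deprecated" warning in its output. Naming the data subsystem directly in the transport ID string avoids the discovery hop; subnqn is a standard key in SPDK transport ID strings, and the rest of the arguments are copied from the run above:

    ./build/bin/spdk_nvme_perf -q 16 -o 4096 -w write -t 1 \
        -r 'trtype:rdma adrfam:IPV4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0'
]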
00:31:26.856 [2024-12-05 14:02:26.510843] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1867001 ] 00:31:26.856 [2024-12-05 14:02:26.564556] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:26.856 [2024-12-05 14:02:26.586928] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:26.856 [2024-12-05 14:02:26.586930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:33.424 bdev a17cbf7d-5c9a-4a44-a28e-e6f9a32348ed reports 1 memory domains 00:31:33.424 bdev a17cbf7d-5c9a-4a44-a28e-e6f9a32348ed supports RDMA memory domain 00:31:33.424 Initialization complete, running randrw IO for 5 sec on 2 cores 00:31:33.424 ========================================================================== 00:31:33.424 Latency [us] 00:31:33.424 IOPS MiB/s Average min max 00:31:33.424 Core 2: 19859.89 77.58 805.00 14.39 11986.01 00:31:33.424 Core 3: 20136.80 78.66 793.96 11.78 11644.30 00:31:33.424 ========================================================================== 00:31:33.424 Total : 39996.69 156.24 799.44 11.78 11986.01 00:31:33.424 00:31:33.424 Total operations: 200045, translate 199939 pull_push 0 memzero 106 00:31:33.424 14:02:31 nvmf_rdma.nvmf_host.dma -- host/dma.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:31:33.424 14:02:31 nvmf_rdma.nvmf_host.dma -- host/dma.sh@120 -- # nvmftestfini 00:31:33.424 14:02:31 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:33.424 14:02:31 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@121 -- # sync 00:31:33.424 14:02:31 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:31:33.424 14:02:31 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:31:33.424 14:02:31 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@124 -- # set +e 00:31:33.424 14:02:31 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:33.424 14:02:31 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:31:33.424 rmmod nvme_rdma 00:31:33.424 rmmod nvme_fabrics 00:31:33.424 14:02:32 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:33.424 14:02:32 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@128 -- # set -e 00:31:33.424 14:02:32 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@129 -- # return 0 00:31:33.424 14:02:32 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@517 -- # '[' -n 1863630 ']' 00:31:33.424 14:02:32 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@518 -- # killprocess 1863630 00:31:33.424 14:02:32 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@954 -- # '[' -z 1863630 ']' 00:31:33.424 14:02:32 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@958 -- # kill -0 1863630 00:31:33.424 14:02:32 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@959 -- # uname 00:31:33.424 14:02:32 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:33.424 14:02:32 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1863630 00:31:33.424 14:02:32 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:33.424 14:02:32 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:33.424 14:02:32 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1863630' 00:31:33.424 killing 
process with pid 1863630 00:31:33.424 14:02:32 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@973 -- # kill 1863630 00:31:33.424 14:02:32 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@978 -- # wait 1863630 00:31:33.425 14:02:32 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:33.425 14:02:32 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:31:33.425 00:31:33.425 real 0m31.331s 00:31:33.425 user 1m33.996s 00:31:33.425 sys 0m5.600s 00:31:33.425 14:02:32 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:33.425 14:02:32 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:31:33.425 ************************************ 00:31:33.425 END TEST dma 00:31:33.425 ************************************ 00:31:33.425 14:02:32 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=rdma 00:31:33.425 14:02:32 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:31:33.425 14:02:32 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:33.425 14:02:32 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:33.425 ************************************ 00:31:33.425 START TEST nvmf_identify 00:31:33.425 ************************************ 00:31:33.425 14:02:32 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=rdma 00:31:33.425 * Looking for test storage... 00:31:33.425 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:31:33.425 14:02:32 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:33.425 14:02:32 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lcov --version 00:31:33.425 14:02:32 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:33.425 14:02:32 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:33.425 14:02:32 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:33.425 14:02:32 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:33.425 14:02:32 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:33.425 14:02:32 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:31:33.425 14:02:32 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:31:33.425 14:02:32 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:31:33.425 14:02:32 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:31:33.425 14:02:32 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:31:33.425 14:02:32 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:31:33.425 14:02:32 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:31:33.425 14:02:32 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:33.425 14:02:32 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:31:33.425 14:02:32 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:31:33.425 14:02:32 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@364 
-- # (( v = 0 )) 00:31:33.425 14:02:32 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:33.425 14:02:32 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:31:33.425 14:02:32 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:31:33.425 14:02:32 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:33.425 14:02:32 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:31:33.425 14:02:32 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:31:33.425 14:02:32 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:31:33.425 14:02:32 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:31:33.425 14:02:32 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:33.425 14:02:32 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:31:33.425 14:02:32 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:31:33.425 14:02:32 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:33.425 14:02:32 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:33.425 14:02:32 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:31:33.425 14:02:32 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:33.425 14:02:32 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:33.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:33.425 --rc genhtml_branch_coverage=1 00:31:33.425 --rc genhtml_function_coverage=1 00:31:33.425 --rc genhtml_legend=1 00:31:33.425 --rc geninfo_all_blocks=1 00:31:33.425 --rc geninfo_unexecuted_blocks=1 00:31:33.425 00:31:33.425 ' 00:31:33.425 14:02:32 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:33.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:33.425 --rc genhtml_branch_coverage=1 00:31:33.425 --rc genhtml_function_coverage=1 00:31:33.425 --rc genhtml_legend=1 00:31:33.425 --rc geninfo_all_blocks=1 00:31:33.425 --rc geninfo_unexecuted_blocks=1 00:31:33.425 00:31:33.425 ' 00:31:33.425 14:02:32 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:33.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:33.425 --rc genhtml_branch_coverage=1 00:31:33.425 --rc genhtml_function_coverage=1 00:31:33.425 --rc genhtml_legend=1 00:31:33.425 --rc geninfo_all_blocks=1 00:31:33.425 --rc geninfo_unexecuted_blocks=1 00:31:33.425 00:31:33.425 ' 00:31:33.425 14:02:32 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:33.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:33.425 --rc genhtml_branch_coverage=1 00:31:33.425 --rc genhtml_function_coverage=1 00:31:33.425 --rc genhtml_legend=1 00:31:33.425 --rc geninfo_all_blocks=1 00:31:33.425 --rc geninfo_unexecuted_blocks=1 00:31:33.425 00:31:33.425 ' 00:31:33.425 14:02:32 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:31:33.425 14:02:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:31:33.425 14:02:32 
nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:33.425 14:02:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:33.425 14:02:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:33.425 14:02:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:33.425 14:02:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:33.425 14:02:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:33.425 14:02:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:33.425 14:02:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:33.425 14:02:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:33.425 14:02:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:33.425 14:02:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:31:33.425 14:02:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:31:33.425 14:02:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:33.425 14:02:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:33.425 14:02:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:33.425 14:02:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:33.425 14:02:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:31:33.425 14:02:32 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:31:33.425 14:02:32 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:33.425 14:02:32 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:33.425 14:02:32 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:33.425 14:02:32 nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:33.425 14:02:32 nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:33.425 14:02:32 nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:33.425 14:02:32 nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:31:33.425 14:02:32 nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:33.425 14:02:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:31:33.425 14:02:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:33.425 14:02:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:33.425 14:02:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:33.425 14:02:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:33.426 14:02:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:33.426 14:02:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:33.426 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:33.426 14:02:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:33.426 14:02:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:33.426 14:02:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:33.426 14:02:32 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:33.426 14:02:32 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:33.426 14:02:32 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:31:33.426 14:02:32 
nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:31:33.426 14:02:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:33.426 14:02:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:33.426 14:02:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:33.426 14:02:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:33.426 14:02:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:33.426 14:02:32 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:33.426 14:02:32 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:33.426 14:02:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:33.426 14:02:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:33.426 14:02:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:31:33.426 14:02:32 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:38.714 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:38.714 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:31:38.714 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:38.714 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:38.714 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:38.714 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:38.714 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:38.714 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:31:38.714 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:38.714 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:31:38.714 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:31:38.714 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:31:38.714 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:31:38.714 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:31:38.714 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:31:38.714 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:38.714 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:38.714 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:38.714 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:38.714 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:38.714 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:38.714 14:02:38 
nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:38.714 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:38.714 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:38.714 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:38.714 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:38.714 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:38.714 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:38.714 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:31:38.714 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:31:38.714 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:31:38.714 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:31:38.714 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:31:38.714 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:38.714 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:38.714 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:31:38.714 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:31:38.714 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:31:38.714 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:31:38.714 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:31:38.714 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:31:38.714 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:31:38.714 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:31:38.714 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:38.714 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:31:38.714 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:31:38.714 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:31:38.714 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:31:38.714 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:31:38.714 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:31:38.714 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:31:38.714 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:31:38.714 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:38.714 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ 
mlx5 == e810 ]] 00:31:38.714 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:38.714 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:38.714 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:31:38.714 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:38.714 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:38.714 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:31:38.714 Found net devices under 0000:18:00.0: mlx_0_0 00:31:38.714 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:38.714 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:38.714 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:38.714 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:31:38.714 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:38.714 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:38.714 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:31:38.714 Found net devices under 0000:18:00.1: mlx_0_1 00:31:38.714 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:38.714 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:38.714 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:31:38.714 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:38.714 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:31:38.714 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:31:38.714 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@448 -- # rdma_device_init 00:31:38.714 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:31:38.714 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@62 -- # uname 00:31:38.714 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:31:38.714 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@66 -- # modprobe ib_cm 00:31:38.714 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@67 -- # modprobe ib_core 00:31:38.714 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@68 -- # modprobe ib_umad 00:31:38.714 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:31:38.714 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@70 -- # modprobe iw_cm 00:31:38.714 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:31:38.714 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:31:38.714 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@530 -- # allocate_nic_ips 00:31:38.714 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@76 -- # (( count = 
NVMF_IP_LEAST_ADDR )) 00:31:38.714 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@77 -- # get_rdma_if_list 00:31:38.714 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:31:38.714 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:31:38.714 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:31:38.715 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:31:38.715 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:31:38.715 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:31:38.715 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:38.715 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:31:38.715 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@108 -- # echo mlx_0_0 00:31:38.715 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@109 -- # continue 2 00:31:38.715 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:31:38.715 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:38.715 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:31:38.715 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:38.715 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:31:38.715 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@108 -- # echo mlx_0_1 00:31:38.715 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@109 -- # continue 2 00:31:38.715 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:31:38.715 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:31:38.715 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:31:38.715 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:31:38.715 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # awk '{print $4}' 00:31:38.715 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # cut -d/ -f1 00:31:38.715 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:31:38.715 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:31:38.715 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:31:38.715 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:31:38.715 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:31:38.715 altname enp24s0f0np0 00:31:38.715 altname ens785f0np0 00:31:38.715 inet 192.168.100.8/24 scope global mlx_0_0 00:31:38.715 valid_lft forever preferred_lft forever 00:31:38.715 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:31:38.715 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:31:38.715 14:02:38 
nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:31:38.715 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:31:38.715 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # awk '{print $4}' 00:31:38.715 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # cut -d/ -f1 00:31:38.715 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:31:38.715 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:31:38.715 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:31:38.715 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:31:38.715 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:31:38.715 altname enp24s0f1np1 00:31:38.715 altname ens785f1np1 00:31:38.715 inet 192.168.100.9/24 scope global mlx_0_1 00:31:38.715 valid_lft forever preferred_lft forever 00:31:38.715 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:31:38.715 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:38.715 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:31:38.715 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:31:38.715 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:31:38.715 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@90 -- # get_rdma_if_list 00:31:38.715 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:31:38.715 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:31:38.715 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:31:38.715 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:31:38.975 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:31:38.975 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:31:38.975 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:38.975 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:31:38.975 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@108 -- # echo mlx_0_0 00:31:38.975 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@109 -- # continue 2 00:31:38.975 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:31:38.975 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:38.975 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:31:38.975 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:38.975 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:31:38.975 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@108 -- # echo mlx_0_1 00:31:38.975 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@109 -- # continue 2 00:31:38.975 14:02:38 
nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:31:38.975 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:31:38.975 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:31:38.975 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:31:38.975 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # awk '{print $4}' 00:31:38.975 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # cut -d/ -f1 00:31:38.975 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:31:38.975 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:31:38.975 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:31:38.975 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:31:38.975 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # awk '{print $4}' 00:31:38.975 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # cut -d/ -f1 00:31:38.975 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:31:38.975 192.168.100.9' 00:31:38.975 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:31:38.975 192.168.100.9' 00:31:38.975 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@485 -- # head -n 1 00:31:38.975 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:31:38.975 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:31:38.975 192.168.100.9' 00:31:38.975 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@486 -- # tail -n +2 00:31:38.975 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@486 -- # head -n 1 00:31:38.975 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:31:38.975 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:31:38.975 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:31:38.975 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:31:38.975 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:31:38.975 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:31:38.975 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:31:38.975 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:38.975 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:38.975 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=1871298 00:31:38.975 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:31:38.975 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:38.975 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # 
waitforlisten 1871298 00:31:38.975 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 1871298 ']' 00:31:38.975 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:38.975 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:38.975 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:38.975 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:38.975 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:38.975 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:38.975 [2024-12-05 14:02:38.683946] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 00:31:38.975 [2024-12-05 14:02:38.683992] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:38.975 [2024-12-05 14:02:38.760282] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:38.975 [2024-12-05 14:02:38.782482] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:38.975 [2024-12-05 14:02:38.782524] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:38.975 [2024-12-05 14:02:38.782530] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:38.975 [2024-12-05 14:02:38.782535] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:38.975 [2024-12-05 14:02:38.782540] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:38.976 [2024-12-05 14:02:38.783729] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:38.976 [2024-12-05 14:02:38.783830] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:38.976 [2024-12-05 14:02:38.783913] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:38.976 [2024-12-05 14:02:38.783912] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:39.235 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:39.235 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:31:39.235 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:31:39.235 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:39.235 14:02:38 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:39.235 [2024-12-05 14:02:38.898084] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xac5f30/0xaca420) succeed. 00:31:39.235 [2024-12-05 14:02:38.906278] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xac75c0/0xb0bac0) succeed. 
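[Note: rpc_cmd in these traces is the autotest wrapper around scripts/rpc.py, so the target bring-up that identify.sh performs next can be replayed by hand against the freshly started nvmf_tgt. A sketch with every argument copied from the rpc_cmd invocations in this trace:

    ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
        --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
]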
00:31:39.235 14:02:39 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:39.235 14:02:39 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:31:39.235 14:02:39 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:39.235 14:02:39 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:39.235 14:02:39 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:39.235 14:02:39 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:39.235 14:02:39 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:39.235 Malloc0 00:31:39.235 14:02:39 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:39.235 14:02:39 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:39.235 14:02:39 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:39.235 14:02:39 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:39.499 14:02:39 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:39.499 14:02:39 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:31:39.499 14:02:39 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:39.499 14:02:39 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:39.499 14:02:39 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:39.499 14:02:39 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:31:39.499 14:02:39 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:39.499 14:02:39 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:39.499 [2024-12-05 14:02:39.111001] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:31:39.499 14:02:39 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:39.499 14:02:39 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:31:39.499 14:02:39 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:39.499 14:02:39 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:39.499 14:02:39 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:39.499 14:02:39 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:31:39.499 14:02:39 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:39.499 14:02:39 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:39.499 [ 00:31:39.499 { 00:31:39.499 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:31:39.499 "subtype": "Discovery", 00:31:39.499 "listen_addresses": [ 00:31:39.499 { 00:31:39.499 "trtype": "RDMA", 
00:31:39.500 "adrfam": "IPv4", 00:31:39.500 "traddr": "192.168.100.8", 00:31:39.500 "trsvcid": "4420" 00:31:39.500 } 00:31:39.500 ], 00:31:39.500 "allow_any_host": true, 00:31:39.500 "hosts": [] 00:31:39.500 }, 00:31:39.500 { 00:31:39.500 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:31:39.500 "subtype": "NVMe", 00:31:39.500 "listen_addresses": [ 00:31:39.500 { 00:31:39.500 "trtype": "RDMA", 00:31:39.500 "adrfam": "IPv4", 00:31:39.500 "traddr": "192.168.100.8", 00:31:39.500 "trsvcid": "4420" 00:31:39.500 } 00:31:39.500 ], 00:31:39.500 "allow_any_host": true, 00:31:39.500 "hosts": [], 00:31:39.500 "serial_number": "SPDK00000000000001", 00:31:39.500 "model_number": "SPDK bdev Controller", 00:31:39.500 "max_namespaces": 32, 00:31:39.500 "min_cntlid": 1, 00:31:39.500 "max_cntlid": 65519, 00:31:39.500 "namespaces": [ 00:31:39.500 { 00:31:39.500 "nsid": 1, 00:31:39.500 "bdev_name": "Malloc0", 00:31:39.500 "name": "Malloc0", 00:31:39.500 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:31:39.500 "eui64": "ABCDEF0123456789", 00:31:39.500 "uuid": "c353aa0b-361a-4b11-83f1-aa008741e46a" 00:31:39.500 } 00:31:39.500 ] 00:31:39.500 } 00:31:39.500 ] 00:31:39.500 14:02:39 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:39.500 14:02:39 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:31:39.500 [2024-12-05 14:02:39.161187] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 00:31:39.500 [2024-12-05 14:02:39.161218] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1871444 ] 00:31:39.500 [2024-12-05 14:02:39.216619] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:31:39.500 [2024-12-05 14:02:39.216697] nvme_rdma.c:2206:nvme_rdma_ctrlr_construct: *DEBUG*: successfully initialized the nvmf ctrlr 00:31:39.500 [2024-12-05 14:02:39.216708] nvme_rdma.c:1204:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: adrfam 1 ai_family 2 00:31:39.500 [2024-12-05 14:02:39.216711] nvme_rdma.c:1208:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: trsvcid is 4420 00:31:39.500 [2024-12-05 14:02:39.216741] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:31:39.500 [2024-12-05 14:02:39.227906] nvme_rdma.c: 427:nvme_rdma_qpair_process_cm_event: *DEBUG*: Requested queue depth 32. Target receive queue depth 32. 
00:31:39.500 [2024-12-05 14:02:39.237608] nvme_rdma.c:1090:nvme_rdma_connect_established: *DEBUG*: rc =0 00:31:39.500 [2024-12-05 14:02:39.237621] nvme_rdma.c:1095:nvme_rdma_connect_established: *DEBUG*: RDMA requests created 00:31:39.500 [2024-12-05 14:02:39.237627] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x181900 00:31:39.500 [2024-12-05 14:02:39.237632] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x181900 00:31:39.500 [2024-12-05 14:02:39.237636] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x181900 00:31:39.500 [2024-12-05 14:02:39.237640] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x181900 00:31:39.500 [2024-12-05 14:02:39.237644] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x181900 00:31:39.500 [2024-12-05 14:02:39.237648] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x181900 00:31:39.500 [2024-12-05 14:02:39.237652] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x181900 00:31:39.500 [2024-12-05 14:02:39.237656] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x181900 00:31:39.500 [2024-12-05 14:02:39.237660] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x181900 00:31:39.500 [2024-12-05 14:02:39.237664] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x181900 00:31:39.500 [2024-12-05 14:02:39.237668] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x181900 00:31:39.500 [2024-12-05 14:02:39.237672] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x181900 00:31:39.500 [2024-12-05 14:02:39.237676] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x181900 00:31:39.500 [2024-12-05 14:02:39.237680] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x181900 00:31:39.500 [2024-12-05 14:02:39.237684] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x181900 00:31:39.500 [2024-12-05 14:02:39.237688] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x181900 00:31:39.500 [2024-12-05 14:02:39.237692] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x181900 00:31:39.500 [2024-12-05 14:02:39.237696] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x181900 00:31:39.500 [2024-12-05 14:02:39.237700] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x181900 00:31:39.500 [2024-12-05 14:02:39.237704] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x181900 00:31:39.500 [2024-12-05 14:02:39.237708] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x181900 00:31:39.500 [2024-12-05 14:02:39.237712] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x181900 00:31:39.500 [2024-12-05 14:02:39.237715] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x181900 00:31:39.500 [2024-12-05 
14:02:39.237720] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x181900 00:31:39.500 [2024-12-05 14:02:39.237723] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x181900 00:31:39.500 [2024-12-05 14:02:39.237727] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x181900 00:31:39.500 [2024-12-05 14:02:39.237731] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa50 length 0x10 lkey 0x181900 00:31:39.500 [2024-12-05 14:02:39.237735] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa78 length 0x10 lkey 0x181900 00:31:39.500 [2024-12-05 14:02:39.237739] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaa0 length 0x10 lkey 0x181900 00:31:39.500 [2024-12-05 14:02:39.237743] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfac8 length 0x10 lkey 0x181900 00:31:39.500 [2024-12-05 14:02:39.237747] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaf0 length 0x10 lkey 0x181900 00:31:39.500 [2024-12-05 14:02:39.237752] nvme_rdma.c:1109:nvme_rdma_connect_established: *DEBUG*: RDMA responses created 00:31:39.500 [2024-12-05 14:02:39.237756] nvme_rdma.c:1112:nvme_rdma_connect_established: *DEBUG*: rc =0 00:31:39.500 [2024-12-05 14:02:39.237759] nvme_rdma.c:1117:nvme_rdma_connect_established: *DEBUG*: RDMA responses submitted 00:31:39.500 [2024-12-05 14:02:39.237778] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x181900 00:31:39.500 [2024-12-05 14:02:39.237795] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf1c0 len:0x400 key:0x181900 00:31:39.500 [2024-12-05 14:02:39.243380] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:39.500 [2024-12-05 14:02:39.243390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:31:39.500 [2024-12-05 14:02:39.243397] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x181900 00:31:39.500 [2024-12-05 14:02:39.243402] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:31:39.500 [2024-12-05 14:02:39.243408] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:31:39.500 [2024-12-05 14:02:39.243412] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:31:39.500 [2024-12-05 14:02:39.243425] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x181900 00:31:39.500 [2024-12-05 14:02:39.243431] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:39.500 [2024-12-05 14:02:39.243451] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:39.500 [2024-12-05 14:02:39.243456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:10300 sqhd:0002 p:0 m:0 dnr:0 00:31:39.500 [2024-12-05 14:02:39.243461] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:31:39.500 [2024-12-05 14:02:39.243465] 
nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x181900 00:31:39.500 [2024-12-05 14:02:39.243469] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:31:39.501 [2024-12-05 14:02:39.243474] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x181900 00:31:39.501 [2024-12-05 14:02:39.243479] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:39.501 [2024-12-05 14:02:39.243497] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:39.501 [2024-12-05 14:02:39.243502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1e01007f sqhd:0003 p:0 m:0 dnr:0 00:31:39.501 [2024-12-05 14:02:39.243506] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:31:39.501 [2024-12-05 14:02:39.243510] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x181900 00:31:39.501 [2024-12-05 14:02:39.243515] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:31:39.501 [2024-12-05 14:02:39.243520] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x181900 00:31:39.501 [2024-12-05 14:02:39.243526] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:39.501 [2024-12-05 14:02:39.243548] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:39.501 [2024-12-05 14:02:39.243552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:31:39.501 [2024-12-05 14:02:39.243560] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:31:39.501 [2024-12-05 14:02:39.243564] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x181900 00:31:39.501 [2024-12-05 14:02:39.243570] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x181900 00:31:39.501 [2024-12-05 14:02:39.243576] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:39.501 [2024-12-05 14:02:39.243596] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:39.501 [2024-12-05 14:02:39.243600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:31:39.501 [2024-12-05 14:02:39.243605] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:31:39.501 [2024-12-05 14:02:39.243609] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:31:39.501 [2024-12-05 14:02:39.243613] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x181900 00:31:39.501 [2024-12-05 
14:02:39.243617] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:31:39.501 [2024-12-05 14:02:39.243724] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:31:39.501 [2024-12-05 14:02:39.243728] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:31:39.501 [2024-12-05 14:02:39.243735] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x181900 00:31:39.501 [2024-12-05 14:02:39.243740] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:39.501 [2024-12-05 14:02:39.243759] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:39.501 [2024-12-05 14:02:39.243763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:31:39.501 [2024-12-05 14:02:39.243768] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:31:39.501 [2024-12-05 14:02:39.243771] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x181900 00:31:39.501 [2024-12-05 14:02:39.243777] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x181900 00:31:39.501 [2024-12-05 14:02:39.243782] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:39.501 [2024-12-05 14:02:39.243796] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:39.501 [2024-12-05 14:02:39.243800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:31:39.501 [2024-12-05 14:02:39.243804] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:31:39.501 [2024-12-05 14:02:39.243808] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:31:39.501 [2024-12-05 14:02:39.243812] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x181900 00:31:39.501 [2024-12-05 14:02:39.243816] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:31:39.501 [2024-12-05 14:02:39.243822] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:31:39.501 [2024-12-05 14:02:39.243831] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x181900 00:31:39.501 [2024-12-05 14:02:39.243837] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x181900 00:31:39.501 [2024-12-05 14:02:39.243880] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 
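The records above are the standard NVMe enable handshake carried over Fabrics property commands: the host observes CC.EN = 0 && CSTS.RDY = 0, issues a PROPERTY SET writing CC.EN = 1, then polls CSTS via PROPERTY GET until CSTS.RDY = 1 (the log allots 15000 ms per step). The sketch below restates that handshake in plain C; prop_get()/prop_set() and the simulated register file are inventions of this sketch standing in for the property commands, not SPDK APIs.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define NVME_REG_CC   0x14 /* controller configuration */
    #define NVME_REG_CSTS 0x1c /* controller status */

    /* Simulated register file so the sketch is self-contained. */
    static uint32_t regs[0x40];

    static uint32_t prop_get(uint32_t off) { return regs[off / 4]; }

    static void prop_set(uint32_t off, uint32_t val)
    {
            regs[off / 4] = val;
            /* Pretend the controller raises CSTS.RDY once CC.EN is set. */
            if (off == NVME_REG_CC && (val & 0x1)) {
                    regs[NVME_REG_CSTS / 4] |= 0x1;
            }
    }

    /* The handshake from the log: write CC.EN = 1, then poll CSTS.RDY. */
    static bool enable_controller(void)
    {
            prop_set(NVME_REG_CC, prop_get(NVME_REG_CC) | 0x1);
            for (int i = 0; i < 15000; i++) { /* "timeout 15000 ms" */
                    if (prop_get(NVME_REG_CSTS) & 0x1) {
                            return true;
                    }
                    /* a real host would sleep ~1 ms between polls */
            }
            return false;
    }

    int main(void)
    {
            printf("controller %s\n", enable_controller() ? "ready" : "timed out");
            return 0;
    }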
00:31:39.501 [2024-12-05 14:02:39.243884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:31:39.501 [2024-12-05 14:02:39.243890] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:31:39.501 [2024-12-05 14:02:39.243895] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:31:39.501 [2024-12-05 14:02:39.243898] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:31:39.501 [2024-12-05 14:02:39.243904] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:31:39.501 [2024-12-05 14:02:39.243908] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:31:39.501 [2024-12-05 14:02:39.243912] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:31:39.501 [2024-12-05 14:02:39.243916] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x181900 00:31:39.501 [2024-12-05 14:02:39.243921] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:31:39.501 [2024-12-05 14:02:39.243926] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x181900 00:31:39.501 [2024-12-05 14:02:39.243932] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:39.501 [2024-12-05 14:02:39.243958] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:39.501 [2024-12-05 14:02:39.243962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:39.501 [2024-12-05 14:02:39.243969] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d04c0 length 0x40 lkey 0x181900 00:31:39.501 [2024-12-05 14:02:39.243974] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:39.501 [2024-12-05 14:02:39.243979] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0600 length 0x40 lkey 0x181900 00:31:39.501 [2024-12-05 14:02:39.243983] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:39.501 [2024-12-05 14:02:39.243988] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181900 00:31:39.501 [2024-12-05 14:02:39.243993] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:39.501 [2024-12-05 14:02:39.243998] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0880 length 0x40 lkey 0x181900 00:31:39.501 [2024-12-05 14:02:39.244002] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:31:39.501 [2024-12-05 14:02:39.244006] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:31:39.501 [2024-12-05 14:02:39.244009] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x181900 00:31:39.501 [2024-12-05 14:02:39.244017] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:31:39.501 [2024-12-05 14:02:39.244022] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x181900 00:31:39.501 [2024-12-05 14:02:39.244027] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:0 cdw10:0000000f SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:39.502 [2024-12-05 14:02:39.244045] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:39.502 [2024-12-05 14:02:39.244049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:2710 sqhd:000a p:0 m:0 dnr:0 00:31:39.502 [2024-12-05 14:02:39.244053] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:31:39.502 [2024-12-05 14:02:39.244058] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:31:39.502 [2024-12-05 14:02:39.244061] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x181900 00:31:39.502 [2024-12-05 14:02:39.244068] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x181900 00:31:39.502 [2024-12-05 14:02:39.244073] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x181900 00:31:39.502 [2024-12-05 14:02:39.244097] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:39.502 [2024-12-05 14:02:39.244101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:31:39.502 [2024-12-05 14:02:39.244107] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x181900 00:31:39.502 [2024-12-05 14:02:39.244114] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:31:39.502 [2024-12-05 14:02:39.244133] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x181900 00:31:39.502 [2024-12-05 14:02:39.244139] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:0 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x400 key:0x181900 00:31:39.502 [2024-12-05 14:02:39.244144] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x181900 00:31:39.502 [2024-12-05 14:02:39.244149] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:31:39.502 [2024-12-05 14:02:39.244172] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:39.502 [2024-12-05 14:02:39.244176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0 
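The GET FEATURES KEEP ALIVE TIMER completion above carries cdw0:2710, i.e. 0x2710 = 10000 ms, and the driver then arms keep alives at half that interval ("Sending keep alive every 5000000 us"). The timeout can be requested from the host side through the public controller options; a brief sketch follows, in which connect_with_kato() is a hypothetical helper name while the opts structure, its keep_alive_timeout_ms field and the surrounding calls are real SPDK API.

    #include "spdk/nvme.h"

    /* Hypothetical helper: connect while requesting the 10000 ms keep-alive
     * timeout observed in the completion above (cdw0:2710). */
    struct spdk_nvme_ctrlr *
    connect_with_kato(const struct spdk_nvme_transport_id *trid)
    {
            struct spdk_nvme_ctrlr_opts opts;

            spdk_nvme_ctrlr_get_default_ctrlr_opts(&opts, sizeof(opts));
            opts.keep_alive_timeout_ms = 10000; /* target may round or clamp this */

            /* Keep alives ride on the admin poll path, so the caller must keep
             * invoking spdk_nvme_ctrlr_process_admin_completions(ctrlr). */
            return spdk_nvme_connect(trid, &opts, sizeof(opts));
    }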
00:31:39.502 [2024-12-05 14:02:39.244185] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0b00 length 0x40 lkey 0x181900
00:31:39.502 [2024-12-05 14:02:39.244190] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0xc00 key:0x181900
00:31:39.502 [2024-12-05 14:02:39.244194] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x181900
00:31:39.502 [2024-12-05 14:02:39.244198] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:31:39.502 [2024-12-05 14:02:39.244202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:31:39.502 [2024-12-05 14:02:39.244206] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x181900
00:31:39.502 [2024-12-05 14:02:39.244223] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:31:39.502 [2024-12-05 14:02:39.244227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:6 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:31:39.502 [2024-12-05 14:02:39.244235] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x181900
00:31:39.502 [2024-12-05 14:02:39.244240] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00010070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x8 key:0x181900
00:31:39.502 [2024-12-05 14:02:39.244244] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x181900
00:31:39.502 [2024-12-05 14:02:39.244260] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:31:39.502 [2024-12-05 14:02:39.244264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:31:39.502 [2024-12-05 14:02:39.244271] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x181900
00:31:39.502 =====================================================
00:31:39.502 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2014-08.org.nvmexpress.discovery
00:31:39.502 =====================================================
00:31:39.502 Controller Capabilities/Features
00:31:39.502 ================================
00:31:39.502 Vendor ID: 0000
00:31:39.502 Subsystem Vendor ID: 0000
00:31:39.502 Serial Number: ....................
00:31:39.502 Model Number: ........................................
00:31:39.502 Firmware Version: 25.01
00:31:39.502 Recommended Arb Burst: 0
00:31:39.502 IEEE OUI Identifier: 00 00 00
00:31:39.502 Multi-path I/O
00:31:39.502 May have multiple subsystem ports: No
00:31:39.502 May have multiple controllers: No
00:31:39.502 Associated with SR-IOV VF: No
00:31:39.502 Max Data Transfer Size: 131072
00:31:39.502 Max Number of Namespaces: 0
00:31:39.502 Max Number of I/O Queues: 1024
00:31:39.502 NVMe Specification Version (VS): 1.3
00:31:39.502 NVMe Specification Version (Identify): 1.3
00:31:39.502 Maximum Queue Entries: 128
00:31:39.502 Contiguous Queues Required: Yes
00:31:39.502 Arbitration Mechanisms Supported
00:31:39.502 Weighted Round Robin: Not Supported
00:31:39.502 Vendor Specific: Not Supported
00:31:39.502 Reset Timeout: 15000 ms
00:31:39.502 Doorbell Stride: 4 bytes
00:31:39.502 NVM Subsystem Reset: Not Supported
00:31:39.502 Command Sets Supported
00:31:39.502 NVM Command Set: Supported
00:31:39.502 Boot Partition: Not Supported
00:31:39.502 Memory Page Size Minimum: 4096 bytes
00:31:39.502 Memory Page Size Maximum: 4096 bytes
00:31:39.502 Persistent Memory Region: Not Supported
00:31:39.502 Optional Asynchronous Events Supported
00:31:39.502 Namespace Attribute Notices: Not Supported
00:31:39.502 Firmware Activation Notices: Not Supported
00:31:39.502 ANA Change Notices: Not Supported
00:31:39.502 PLE Aggregate Log Change Notices: Not Supported
00:31:39.502 LBA Status Info Alert Notices: Not Supported
00:31:39.502 EGE Aggregate Log Change Notices: Not Supported
00:31:39.502 Normal NVM Subsystem Shutdown event: Not Supported
00:31:39.502 Zone Descriptor Change Notices: Not Supported
00:31:39.502 Discovery Log Change Notices: Supported
00:31:39.502 Controller Attributes
00:31:39.502 128-bit Host Identifier: Not Supported
00:31:39.502 Non-Operational Permissive Mode: Not Supported
00:31:39.502 NVM Sets: Not Supported
00:31:39.502 Read Recovery Levels: Not Supported
00:31:39.502 Endurance Groups: Not Supported
00:31:39.502 Predictable Latency Mode: Not Supported
00:31:39.502 Traffic Based Keep ALive: Not Supported
00:31:39.502 Namespace Granularity: Not Supported
00:31:39.502 SQ Associations: Not Supported
00:31:39.502 UUID List: Not Supported
00:31:39.502 Multi-Domain Subsystem: Not Supported
00:31:39.502 Fixed Capacity Management: Not Supported
00:31:39.502 Variable Capacity Management: Not Supported
00:31:39.502 Delete Endurance Group: Not Supported
00:31:39.502 Delete NVM Set: Not Supported
00:31:39.502 Extended LBA Formats Supported: Not Supported
00:31:39.502 Flexible Data Placement Supported: Not Supported
00:31:39.502
00:31:39.502 Controller Memory Buffer Support
00:31:39.502 ================================
00:31:39.502 Supported: No
00:31:39.502
00:31:39.502 Persistent Memory Region Support
00:31:39.502 ================================
00:31:39.502 Supported: No
00:31:39.502
00:31:39.502 Admin Command Set Attributes
00:31:39.502 ============================
00:31:39.502 Security Send/Receive: Not Supported
00:31:39.502 Format NVM: Not Supported
00:31:39.502 Firmware Activate/Download: Not Supported
00:31:39.502 Namespace Management: Not Supported
00:31:39.502 Device Self-Test: Not Supported
00:31:39.502 Directives: Not Supported
00:31:39.502 NVMe-MI: Not Supported
00:31:39.502 Virtualization Management: Not Supported
00:31:39.502 Doorbell Buffer Config: Not Supported
00:31:39.502 Get LBA Status Capability: Not Supported
00:31:39.502 Command & Feature Lockdown Capability: Not Supported
00:31:39.502 Abort Command Limit: 1
00:31:39.502 Async Event Request Limit: 4
00:31:39.502 Number of Firmware Slots: N/A
00:31:39.502 Firmware Slot 1 Read-Only: N/A
00:31:39.502 Firmware Activation Without Reset: N/A
00:31:39.502 Multiple Update Detection Support: N/A
00:31:39.503 Firmware Update Granularity: No Information Provided
00:31:39.503 Per-Namespace SMART Log: No
00:31:39.503 Asymmetric Namespace Access Log Page: Not Supported
00:31:39.503 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:31:39.503 Command Effects Log Page: Not Supported
00:31:39.503 Get Log Page Extended Data: Supported
00:31:39.503 Telemetry Log Pages: Not Supported
00:31:39.503 Persistent Event Log Pages: Not Supported
00:31:39.503 Supported Log Pages Log Page: May Support
00:31:39.503 Commands Supported & Effects Log Page: Not Supported
00:31:39.503 Feature Identifiers & Effects Log Page:May Support
00:31:39.503 NVMe-MI Commands & Effects Log Page: May Support
00:31:39.503 Data Area 4 for Telemetry Log: Not Supported
00:31:39.503 Error Log Page Entries Supported: 128
00:31:39.503 Keep Alive: Not Supported
00:31:39.503
00:31:39.503 NVM Command Set Attributes
00:31:39.503 ==========================
00:31:39.503 Submission Queue Entry Size
00:31:39.503 Max: 1
00:31:39.503 Min: 1
00:31:39.503 Completion Queue Entry Size
00:31:39.503 Max: 1
00:31:39.503 Min: 1
00:31:39.503 Number of Namespaces: 0
00:31:39.503 Compare Command: Not Supported
00:31:39.503 Write Uncorrectable Command: Not Supported
00:31:39.503 Dataset Management Command: Not Supported
00:31:39.503 Write Zeroes Command: Not Supported
00:31:39.503 Set Features Save Field: Not Supported
00:31:39.503 Reservations: Not Supported
00:31:39.503 Timestamp: Not Supported
00:31:39.503 Copy: Not Supported
00:31:39.503 Volatile Write Cache: Not Present
00:31:39.503 Atomic Write Unit (Normal): 1
00:31:39.503 Atomic Write Unit (PFail): 1
00:31:39.503 Atomic Compare & Write Unit: 1
00:31:39.503 Fused Compare & Write: Supported
00:31:39.503 Scatter-Gather List
00:31:39.503 SGL Command Set: Supported
00:31:39.503 SGL Keyed: Supported
00:31:39.503 SGL Bit Bucket Descriptor: Not Supported
00:31:39.503 SGL Metadata Pointer: Not Supported
00:31:39.503 Oversized SGL: Not Supported
00:31:39.503 SGL Metadata Address: Not Supported
00:31:39.503 SGL Offset: Supported
00:31:39.503 Transport SGL Data Block: Not Supported
00:31:39.503 Replay Protected Memory Block: Not Supported
00:31:39.503
00:31:39.503 Firmware Slot Information
00:31:39.503 =========================
00:31:39.503 Active slot: 0
00:31:39.503
00:31:39.503
00:31:39.503 Error Log
00:31:39.503 =========
00:31:39.503
00:31:39.503 Active Namespaces
00:31:39.503 =================
00:31:39.503 Discovery Log Page
00:31:39.503 ==================
00:31:39.503 Generation Counter: 2
00:31:39.503 Number of Records: 2
00:31:39.503 Record Format: 0
00:31:39.503
00:31:39.503 Discovery Log Entry 0
00:31:39.503 ----------------------
00:31:39.503 Transport Type: 1 (RDMA)
00:31:39.503 Address Family: 1 (IPv4)
00:31:39.503 Subsystem Type: 3 (Current Discovery Subsystem)
00:31:39.503 Entry Flags:
00:31:39.503 Duplicate Returned Information: 1
00:31:39.503 Explicit Persistent Connection Support for Discovery: 1
00:31:39.503 Transport Requirements:
00:31:39.503 Secure Channel: Not Required
00:31:39.503 Port ID: 0 (0x0000)
00:31:39.503 Controller ID: 65535 (0xffff)
00:31:39.503 Admin Max SQ Size: 128
00:31:39.503 Transport Service Identifier: 4420
00:31:39.503 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:31:39.503 Transport Address: 192.168.100.8
00:31:39.503
Transport Specific Address Subtype - RDMA 00:31:39.503 RDMA QP Service Type: 1 (Reliable Connected) 00:31:39.503 RDMA Provider Type: 1 (No provider specified) 00:31:39.503 RDMA CM Service: 1 (RDMA_CM) 00:31:39.503 Discovery Log Entry 1 00:31:39.503 ---------------------- 00:31:39.503 Transport Type: 1 (RDMA) 00:31:39.503 Address Family: 1 (IPv4) 00:31:39.503 Subsystem Type: 2 (NVM Subsystem) 00:31:39.503 Entry Flags: 00:31:39.503 Duplicate Returned Information: 0 00:31:39.503 Explicit Persistent Connection Support for Discovery: 0 00:31:39.503 Transport Requirements: 00:31:39.503 Secure Channel: Not Required 00:31:39.503 Port ID: 0 (0x0000) 00:31:39.503 Controller ID: 65535 (0xffff) 00:31:39.503 Admin Max SQ Size: [2024-12-05 14:02:39.244334] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:31:39.503 [2024-12-05 14:02:39.244342] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 13520 doesn't match qid 00:31:39.503 [2024-12-05 14:02:39.244353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32541 cdw0:ecb51c10 sqhd:3a40 p:0 m:0 dnr:0 00:31:39.503 [2024-12-05 14:02:39.244357] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 13520 doesn't match qid 00:31:39.503 [2024-12-05 14:02:39.244364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32541 cdw0:ecb51c10 sqhd:3a40 p:0 m:0 dnr:0 00:31:39.503 [2024-12-05 14:02:39.244368] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 13520 doesn't match qid 00:31:39.503 [2024-12-05 14:02:39.244373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32541 cdw0:ecb51c10 sqhd:3a40 p:0 m:0 dnr:0 00:31:39.503 [2024-12-05 14:02:39.244383] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 13520 doesn't match qid 00:31:39.503 [2024-12-05 14:02:39.244388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32541 cdw0:ecb51c10 sqhd:3a40 p:0 m:0 dnr:0 00:31:39.503 [2024-12-05 14:02:39.244397] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0880 length 0x40 lkey 0x181900 00:31:39.503 [2024-12-05 14:02:39.244403] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:4 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:39.503 [2024-12-05 14:02:39.244425] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:39.503 [2024-12-05 14:02:39.244429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:4 cdw0:460001 sqhd:0010 p:0 m:0 dnr:0 00:31:39.503 [2024-12-05 14:02:39.244435] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181900 00:31:39.503 [2024-12-05 14:02:39.244440] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:39.503 [2024-12-05 14:02:39.244444] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x181900 00:31:39.503 [2024-12-05 14:02:39.244462] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:39.503 [2024-12-05 14:02:39.244465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:31:39.503 [2024-12-05 14:02:39.244470] 
nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:31:39.503 [2024-12-05 14:02:39.244474] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:31:39.503 [2024-12-05 14:02:39.244477] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x181900 00:31:39.503 [2024-12-05 14:02:39.244484] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181900 00:31:39.503 [2024-12-05 14:02:39.244489] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:39.503 [2024-12-05 14:02:39.244509] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:39.503 [2024-12-05 14:02:39.244513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:31:39.503 [2024-12-05 14:02:39.244518] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x181900 00:31:39.504 [2024-12-05 14:02:39.244524] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181900 00:31:39.504 [2024-12-05 14:02:39.244529] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:39.504 [2024-12-05 14:02:39.244547] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:39.504 [2024-12-05 14:02:39.244551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:31:39.504 [2024-12-05 14:02:39.244555] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x181900 00:31:39.504 [2024-12-05 14:02:39.244562] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181900 00:31:39.504 [2024-12-05 14:02:39.244567] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:39.504 [2024-12-05 14:02:39.244586] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:39.504 [2024-12-05 14:02:39.244590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:31:39.504 [2024-12-05 14:02:39.244594] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x181900 00:31:39.504 [2024-12-05 14:02:39.244600] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181900 00:31:39.504 [2024-12-05 14:02:39.244605] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:39.504 [2024-12-05 14:02:39.244625] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:39.504 [2024-12-05 14:02:39.244629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:31:39.504 [2024-12-05 14:02:39.244634] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x181900 00:31:39.504 [2024-12-05 14:02:39.244640] 
nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181900 00:31:39.504 [2024-12-05 14:02:39.244645] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:39.504 [2024-12-05 14:02:39.244661] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:39.504 [2024-12-05 14:02:39.244665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:31:39.504 [2024-12-05 14:02:39.244669] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x181900 00:31:39.504 [2024-12-05 14:02:39.244676] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181900 00:31:39.504 [2024-12-05 14:02:39.244681] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:39.504 [2024-12-05 14:02:39.244697] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:39.504 [2024-12-05 14:02:39.244701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:31:39.504 [2024-12-05 14:02:39.244705] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x181900 00:31:39.504 [2024-12-05 14:02:39.244712] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181900 00:31:39.504 [2024-12-05 14:02:39.244719] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:39.504 [2024-12-05 14:02:39.244735] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:39.504 [2024-12-05 14:02:39.244739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:31:39.504 [2024-12-05 14:02:39.244743] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x181900 00:31:39.504 [2024-12-05 14:02:39.244749] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181900 00:31:39.504 [2024-12-05 14:02:39.244754] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:39.504 [2024-12-05 14:02:39.244768] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:39.504 [2024-12-05 14:02:39.244772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0 00:31:39.504 [2024-12-05 14:02:39.244776] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x181900 00:31:39.504 [2024-12-05 14:02:39.244783] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181900 00:31:39.504 [2024-12-05 14:02:39.244788] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:39.504 [2024-12-05 14:02:39.244805] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:39.504 [2024-12-05 14:02:39.244808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 dnr:0 00:31:39.504 [2024-12-05 14:02:39.244813] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x181900 00:31:39.504 [2024-12-05 14:02:39.244819] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181900 00:31:39.504 [2024-12-05 14:02:39.244824] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:39.504 [2024-12-05 14:02:39.244845] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:39.504 [2024-12-05 14:02:39.244848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:31:39.504 [2024-12-05 14:02:39.244853] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa50 length 0x10 lkey 0x181900 00:31:39.504 [2024-12-05 14:02:39.244859] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181900 00:31:39.504 [2024-12-05 14:02:39.244864] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:39.504 [2024-12-05 14:02:39.244881] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:39.504 [2024-12-05 14:02:39.244885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:31:39.504 [2024-12-05 14:02:39.244889] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa78 length 0x10 lkey 0x181900 00:31:39.504 [2024-12-05 14:02:39.244895] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181900 00:31:39.504 [2024-12-05 14:02:39.244900] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:39.504 [2024-12-05 14:02:39.244917] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:39.504 [2024-12-05 14:02:39.244921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:31:39.504 [2024-12-05 14:02:39.244925] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaa0 length 0x10 lkey 0x181900 00:31:39.504 [2024-12-05 14:02:39.244931] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181900 00:31:39.504 [2024-12-05 14:02:39.244938] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:39.504 [2024-12-05 14:02:39.244955] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:39.504 [2024-12-05 14:02:39.244959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:31:39.504 [2024-12-05 14:02:39.244963] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfac8 length 0x10 lkey 0x181900 00:31:39.504 [2024-12-05 14:02:39.244969] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181900 00:31:39.504 [2024-12-05 14:02:39.244974] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK 
ADDRESS 0x0 len:0x0 key:0x0 00:31:39.504 [2024-12-05 14:02:39.244988] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:39.504 [2024-12-05 14:02:39.244992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:31:39.504 [2024-12-05 14:02:39.244996] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaf0 length 0x10 lkey 0x181900 00:31:39.504 [2024-12-05 14:02:39.245003] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181900 00:31:39.504 [2024-12-05 14:02:39.245008] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:39.504 [2024-12-05 14:02:39.245022] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:39.504 [2024-12-05 14:02:39.245026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:31:39.504 [2024-12-05 14:02:39.245030] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x181900 00:31:39.504 [2024-12-05 14:02:39.245036] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181900 00:31:39.504 [2024-12-05 14:02:39.245042] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:39.504 [2024-12-05 14:02:39.245060] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:39.504 [2024-12-05 14:02:39.245064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:31:39.504 [2024-12-05 14:02:39.245068] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x181900 00:31:39.505 [2024-12-05 14:02:39.245074] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181900 00:31:39.505 [2024-12-05 14:02:39.245079] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:39.505 [2024-12-05 14:02:39.245100] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:39.505 [2024-12-05 14:02:39.245104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:31:39.505 [2024-12-05 14:02:39.245108] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x181900 00:31:39.505 [2024-12-05 14:02:39.245115] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181900 00:31:39.505 [2024-12-05 14:02:39.245120] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:39.505 [2024-12-05 14:02:39.245135] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:39.505 [2024-12-05 14:02:39.245139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:31:39.505 [2024-12-05 14:02:39.245143] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x181900 00:31:39.505 [2024-12-05 14:02:39.245151] 
nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181900 00:31:39.505 [2024-12-05 14:02:39.245156] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:39.505 [2024-12-05 14:02:39.245170] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:39.505 [2024-12-05 14:02:39.245174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:31:39.505 [2024-12-05 14:02:39.245178] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x181900 00:31:39.505 [2024-12-05 14:02:39.245184] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181900 00:31:39.505 [2024-12-05 14:02:39.245189] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:39.505 [2024-12-05 14:02:39.245206] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:39.505 [2024-12-05 14:02:39.245210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:31:39.505 [2024-12-05 14:02:39.245214] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x181900 00:31:39.505 [2024-12-05 14:02:39.245220] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181900 00:31:39.505 [2024-12-05 14:02:39.245225] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:39.505 [2024-12-05 14:02:39.245243] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:39.505 [2024-12-05 14:02:39.245246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:31:39.505 [2024-12-05 14:02:39.245250] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x181900 00:31:39.505 [2024-12-05 14:02:39.245257] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181900 00:31:39.505 [2024-12-05 14:02:39.245261] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:39.505 [2024-12-05 14:02:39.245279] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:39.505 [2024-12-05 14:02:39.245282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:31:39.505 [2024-12-05 14:02:39.245286] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x181900 00:31:39.505 [2024-12-05 14:02:39.245293] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181900 00:31:39.505 [2024-12-05 14:02:39.245298] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:39.505 [2024-12-05 14:02:39.245316] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:39.505 [2024-12-05 14:02:39.245320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:31:39.505 [2024-12-05 14:02:39.245324] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x181900 00:31:39.505 [2024-12-05 14:02:39.245330] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181900 00:31:39.505 [2024-12-05 14:02:39.245335] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:39.505 [2024-12-05 14:02:39.245357] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:39.505 [2024-12-05 14:02:39.245361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:31:39.505 [2024-12-05 14:02:39.245366] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x181900 00:31:39.505 [2024-12-05 14:02:39.245373] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181900 00:31:39.505 [2024-12-05 14:02:39.245381] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:39.505 [2024-12-05 14:02:39.245399] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:39.505 [2024-12-05 14:02:39.245402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000a p:0 m:0 dnr:0 00:31:39.505 [2024-12-05 14:02:39.245407] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x181900 00:31:39.505 [2024-12-05 14:02:39.245413] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181900 00:31:39.505 [2024-12-05 14:02:39.245418] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:39.505 [2024-12-05 14:02:39.245436] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:39.505 [2024-12-05 14:02:39.245440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000b p:0 m:0 dnr:0 00:31:39.505 [2024-12-05 14:02:39.245444] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x181900 00:31:39.505 [2024-12-05 14:02:39.245451] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181900 00:31:39.505 [2024-12-05 14:02:39.245456] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:39.505 [2024-12-05 14:02:39.245470] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:39.505 [2024-12-05 14:02:39.245482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000c p:0 m:0 dnr:0 00:31:39.505 [2024-12-05 14:02:39.245486] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x181900 00:31:39.505 [2024-12-05 14:02:39.245493] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181900 00:31:39.505 [2024-12-05 14:02:39.245498] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK 
ADDRESS 0x0 len:0x0 key:0x0 00:31:39.506 [2024-12-05 14:02:39.245516] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:39.506 [2024-12-05 14:02:39.245520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000d p:0 m:0 dnr:0 00:31:39.506 [2024-12-05 14:02:39.245524] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x181900 00:31:39.506 [2024-12-05 14:02:39.245530] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181900 00:31:39.506 [2024-12-05 14:02:39.245535] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:39.506 [2024-12-05 14:02:39.245554] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:39.506 [2024-12-05 14:02:39.245558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000e p:0 m:0 dnr:0 00:31:39.506 [2024-12-05 14:02:39.245562] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x181900 00:31:39.506 [2024-12-05 14:02:39.245568] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181900 00:31:39.506 [2024-12-05 14:02:39.245573] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:39.506 [2024-12-05 14:02:39.245590] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:39.506 [2024-12-05 14:02:39.245594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000f p:0 m:0 dnr:0 00:31:39.506 [2024-12-05 14:02:39.245600] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x181900 00:31:39.506 [2024-12-05 14:02:39.245606] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181900 00:31:39.506 [2024-12-05 14:02:39.245611] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:39.506 [2024-12-05 14:02:39.245627] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:39.506 [2024-12-05 14:02:39.245630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0010 p:0 m:0 dnr:0 00:31:39.506 [2024-12-05 14:02:39.245634] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x181900 00:31:39.506 [2024-12-05 14:02:39.245640] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181900 00:31:39.506 [2024-12-05 14:02:39.245646] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:39.506 [2024-12-05 14:02:39.245663] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:39.506 [2024-12-05 14:02:39.245667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0011 p:0 m:0 dnr:0 00:31:39.506 [2024-12-05 14:02:39.245671] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x181900 00:31:39.506 [2024-12-05 14:02:39.245677] 
nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181900 00:31:39.506 [2024-12-05 14:02:39.245682] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:39.506 [2024-12-05 14:02:39.245696] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:39.506 [2024-12-05 14:02:39.245700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0
[identical nvme_rdma_request_ready / _nvme_rdma_qpair_submit_request / FABRIC PROPERTY GET / CQ recv completion / SUCCESS records repeat for sqhd 0013 through 001e while the shutdown poll loops]
00:31:39.509 [2024-12-05 14:02:39.247367] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:39.509 [2024-12-05 14:02:39.247371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:31:39.509 [2024-12-05 14:02:39.251381] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x181900 00:31:39.509 [2024-12-05 14:02:39.251388] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181900 00:31:39.509 [2024-12-05 14:02:39.251394] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:39.509 [2024-12-05 14:02:39.251412] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:39.509 [2024-12-05 14:02:39.251416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*:
SUCCESS (00/00) qid:0 cid:3 cdw0:9 sqhd:0000 p:0 m:0 dnr:0 00:31:39.509 [2024-12-05 14:02:39.251420] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x181900 00:31:39.509 [2024-12-05 14:02:39.251425] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 6 milliseconds 00:31:39.509 128 00:31:39.509 Transport Service Identifier: 4420 00:31:39.509 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:31:39.509 Transport Address: 192.168.100.8 00:31:39.509 Transport Specific Address Subtype - RDMA 00:31:39.509 RDMA QP Service Type: 1 (Reliable Connected) 00:31:39.509 RDMA Provider Type: 1 (No provider specified) 00:31:39.509 RDMA CM Service: 1 (RDMA_CM) 00:31:39.509 14:02:39 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:31:39.509 [2024-12-05 14:02:39.320200] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 00:31:39.509 [2024-12-05 14:02:39.320244] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1871525 ] 00:31:39.775 [2024-12-05 14:02:39.374653] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:31:39.775 [2024-12-05 14:02:39.374713] nvme_rdma.c:2206:nvme_rdma_ctrlr_construct: *DEBUG*: successfully initialized the nvmf ctrlr 00:31:39.775 [2024-12-05 14:02:39.374723] nvme_rdma.c:1204:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: adrfam 1 ai_family 2 00:31:39.775 [2024-12-05 14:02:39.374726] nvme_rdma.c:1208:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: trsvcid is 4420 00:31:39.775 [2024-12-05 14:02:39.374748] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:31:39.775 [2024-12-05 14:02:39.384028] nvme_rdma.c: 427:nvme_rdma_qpair_process_cm_event: *DEBUG*: Requested queue depth 32. Target receive queue depth 32. 
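
The spdk_nvme_identify invocation above hands the driver a transport ID string, and the DEBUG records that follow trace the resulting fabric connect and controller-init sequence. As a rough illustration only (not part of the test output), the same connection can be made programmatically; spdk_nvme_transport_id_parse, spdk_nvme_connect, spdk_nvme_ctrlr_get_data, and spdk_nvme_detach are public SPDK APIs, while the scaffolding and error handling here are a minimal sketch:

    #include <stdio.h>
    #include "spdk/env.h"
    #include "spdk/nvme.h"

    int main(void)
    {
        struct spdk_env_opts env_opts;
        struct spdk_nvme_transport_id trid = {};
        struct spdk_nvme_ctrlr *ctrlr;

        /* Initialize the SPDK environment (hugepages, memory, etc.). */
        spdk_env_opts_init(&env_opts);
        if (spdk_env_init(&env_opts) != 0) {
            return 1;
        }

        /* Same transport ID string the test passes via spdk_nvme_identify -r. */
        if (spdk_nvme_transport_id_parse(&trid,
                "trtype:rdma adrfam:IPv4 traddr:192.168.100.8 "
                "trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
            return 1;
        }

        /* Drives the FABRIC CONNECT / PROPERTY GET / IDENTIFY sequence
         * visible in the debug records below. */
        ctrlr = spdk_nvme_connect(&trid, NULL, 0);
        if (ctrlr == NULL) {
            fprintf(stderr, "spdk_nvme_connect failed\n");
            return 1;
        }

        /* The CNTLID 0x0001 logged by nvme_fabric_qpair_connect_poll shows
         * up again in the identify controller data. */
        printf("connected, CNTLID 0x%04x\n",
               spdk_nvme_ctrlr_get_data(ctrlr)->cntlid);

        spdk_nvme_detach(ctrlr);
        return 0;
    }
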
00:31:39.775 [2024-12-05 14:02:39.395697] nvme_rdma.c:1090:nvme_rdma_connect_established: *DEBUG*: rc =0 00:31:39.776 [2024-12-05 14:02:39.395706] nvme_rdma.c:1095:nvme_rdma_connect_established: *DEBUG*: RDMA requests created 00:31:39.776 [2024-12-05 14:02:39.395711] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x181900
[31 more identical nvme_rdma_create_rsps records for local addrs 0x2000003cf668 through 0x2000003cfaf0 elided]
00:31:39.776 [2024-12-05 14:02:39.395839] nvme_rdma.c:1109:nvme_rdma_connect_established: *DEBUG*: RDMA responses created 00:31:39.776 [2024-12-05 14:02:39.395842] nvme_rdma.c:1112:nvme_rdma_connect_established: *DEBUG*: rc =0 00:31:39.776 [2024-12-05 14:02:39.395847] nvme_rdma.c:1117:nvme_rdma_connect_established: *DEBUG*: RDMA responses submitted 00:31:39.776 [2024-12-05 14:02:39.395860] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x181900 00:31:39.776 [2024-12-05 14:02:39.395870] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf1c0 len:0x400 key:0x181900 00:31:39.776 [2024-12-05 14:02:39.401380] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:39.776 [2024-12-05 14:02:39.401388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:31:39.776 [2024-12-05 14:02:39.401393] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x181900 00:31:39.776 [2024-12-05 14:02:39.401398] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:31:39.776 [2024-12-05 14:02:39.401403] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:31:39.776 [2024-12-05 14:02:39.401408] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:31:39.776 [2024-12-05 14:02:39.401417] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x181900 00:31:39.776 [2024-12-05 14:02:39.401423] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:39.776 [2024-12-05 14:02:39.401440] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:39.776 [2024-12-05 14:02:39.401444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:10300 sqhd:0002 p:0 m:0 dnr:0 00:31:39.776 [2024-12-05 14:02:39.401449] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:31:39.776 [2024-12-05 14:02:39.401453]
nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x181900 00:31:39.776 [2024-12-05 14:02:39.401457] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:31:39.776 [2024-12-05 14:02:39.401462] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x181900 00:31:39.776 [2024-12-05 14:02:39.401468] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:39.776 [2024-12-05 14:02:39.401486] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:39.776 [2024-12-05 14:02:39.401490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1e01007f sqhd:0003 p:0 m:0 dnr:0 00:31:39.776 [2024-12-05 14:02:39.401495] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:31:39.776 [2024-12-05 14:02:39.401499] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x181900 00:31:39.776 [2024-12-05 14:02:39.401503] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:31:39.776 [2024-12-05 14:02:39.401508] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x181900 00:31:39.776 [2024-12-05 14:02:39.401514] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:39.776 [2024-12-05 14:02:39.401534] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:39.776 [2024-12-05 14:02:39.401538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:31:39.776 [2024-12-05 14:02:39.401542] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:31:39.776 [2024-12-05 14:02:39.401546] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x181900 00:31:39.776 [2024-12-05 14:02:39.401554] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x181900 00:31:39.776 [2024-12-05 14:02:39.401559] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:39.776 [2024-12-05 14:02:39.401574] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:39.776 [2024-12-05 14:02:39.401578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:31:39.776 [2024-12-05 14:02:39.401581] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:31:39.776 [2024-12-05 14:02:39.401585] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:31:39.776 [2024-12-05 14:02:39.401589] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x181900 00:31:39.776 [2024-12-05 14:02:39.401594] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:31:39.776 [2024-12-05 14:02:39.401700] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:31:39.776 [2024-12-05 14:02:39.401704] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:31:39.776 [2024-12-05 14:02:39.401710] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x181900 00:31:39.777 [2024-12-05 14:02:39.401715] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:39.777 [2024-12-05 14:02:39.401733] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:39.777 [2024-12-05 14:02:39.401736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:31:39.777 [2024-12-05 14:02:39.401740] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:31:39.777 [2024-12-05 14:02:39.401744] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x181900 00:31:39.777 [2024-12-05 14:02:39.401750] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x181900 00:31:39.777 [2024-12-05 14:02:39.401755] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:39.777 [2024-12-05 14:02:39.401770] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:39.777 [2024-12-05 14:02:39.401774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:31:39.777 [2024-12-05 14:02:39.401778] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:31:39.777 [2024-12-05 14:02:39.401781] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:31:39.777 [2024-12-05 14:02:39.401785] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x181900 00:31:39.777 [2024-12-05 14:02:39.401790] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:31:39.777 [2024-12-05 14:02:39.401795] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:31:39.777 [2024-12-05 14:02:39.401802] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x181900 00:31:39.777 [2024-12-05 14:02:39.401808] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x181900 00:31:39.777 [2024-12-05 14:02:39.401838] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:39.777 [2024-12-05 14:02:39.401842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0008 p:0 m:0 
dnr:0 00:31:39.777 [2024-12-05 14:02:39.401848] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:31:39.777 [2024-12-05 14:02:39.401852] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:31:39.777 [2024-12-05 14:02:39.401856] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:31:39.777 [2024-12-05 14:02:39.401862] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:31:39.777 [2024-12-05 14:02:39.401866] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:31:39.777 [2024-12-05 14:02:39.401870] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:31:39.777 [2024-12-05 14:02:39.401874] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x181900 00:31:39.777 [2024-12-05 14:02:39.401879] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:31:39.777 [2024-12-05 14:02:39.401884] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x181900 00:31:39.777 [2024-12-05 14:02:39.401889] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:39.777 [2024-12-05 14:02:39.401907] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:39.777 [2024-12-05 14:02:39.401911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:39.777 [2024-12-05 14:02:39.401916] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d04c0 length 0x40 lkey 0x181900 00:31:39.777 [2024-12-05 14:02:39.401921] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:39.777 [2024-12-05 14:02:39.401926] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0600 length 0x40 lkey 0x181900 00:31:39.777 [2024-12-05 14:02:39.401930] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:39.777 [2024-12-05 14:02:39.401935] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181900 00:31:39.777 [2024-12-05 14:02:39.401940] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:39.777 [2024-12-05 14:02:39.401945] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0880 length 0x40 lkey 0x181900 00:31:39.777 [2024-12-05 14:02:39.401949] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:31:39.777 [2024-12-05 14:02:39.401953] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:31:39.777 [2024-12-05 14:02:39.401956] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x181900 
00:31:39.777 [2024-12-05 14:02:39.401962] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:31:39.777 [2024-12-05 14:02:39.401967] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x181900 00:31:39.777 [2024-12-05 14:02:39.401973] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:0 cdw10:0000000f SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:39.777 [2024-12-05 14:02:39.401992] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:39.777 [2024-12-05 14:02:39.401996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:2710 sqhd:000a p:0 m:0 dnr:0 00:31:39.777 [2024-12-05 14:02:39.402000] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:31:39.777 [2024-12-05 14:02:39.402004] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:31:39.777 [2024-12-05 14:02:39.402008] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x181900 00:31:39.777 [2024-12-05 14:02:39.402013] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:31:39.777 [2024-12-05 14:02:39.402018] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:31:39.777 [2024-12-05 14:02:39.402023] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x181900 00:31:39.777 [2024-12-05 14:02:39.402028] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:39.777 [2024-12-05 14:02:39.402043] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:39.777 [2024-12-05 14:02:39.402047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:7e007e sqhd:000b p:0 m:0 dnr:0 00:31:39.777 [2024-12-05 14:02:39.402094] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:31:39.777 [2024-12-05 14:02:39.402098] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x181900 00:31:39.777 [2024-12-05 14:02:39.402103] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:31:39.777 [2024-12-05 14:02:39.402109] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x181900 00:31:39.777 [2024-12-05 14:02:39.402115] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000002 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cc000 len:0x1000 key:0x181900 00:31:39.777 [2024-12-05 14:02:39.402140] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:39.777 [2024-12-05 14:02:39.402143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:31:39.777 
[2024-12-05 14:02:39.402150] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:31:39.777 [2024-12-05 14:02:39.402161] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:31:39.777 [2024-12-05 14:02:39.402165] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x181900 00:31:39.777 [2024-12-05 14:02:39.402171] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:31:39.777 [2024-12-05 14:02:39.402176] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x181900 00:31:39.777 [2024-12-05 14:02:39.402181] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:1 cdw10:00000000 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x181900 00:31:39.777 [2024-12-05 14:02:39.402210] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:39.778 [2024-12-05 14:02:39.402214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:31:39.778 [2024-12-05 14:02:39.402223] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:31:39.778 [2024-12-05 14:02:39.402229] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x181900 00:31:39.778 [2024-12-05 14:02:39.402235] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:31:39.778 [2024-12-05 14:02:39.402240] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x181900 00:31:39.778 [2024-12-05 14:02:39.402245] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:1 cdw10:00000003 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x181900 00:31:39.778 [2024-12-05 14:02:39.402273] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:39.778 [2024-12-05 14:02:39.402277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:31:39.778 [2024-12-05 14:02:39.402283] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:31:39.778 [2024-12-05 14:02:39.402287] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x181900 00:31:39.778 [2024-12-05 14:02:39.402292] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:31:39.778 [2024-12-05 14:02:39.402298] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:31:39.778 [2024-12-05 14:02:39.402303] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:31:39.778 [2024-12-05 14:02:39.402308] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell 
buffer config (timeout 30000 ms) 00:31:39.778 [2024-12-05 14:02:39.402312] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:31:39.778 [2024-12-05 14:02:39.402316] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:31:39.778 [2024-12-05 14:02:39.402320] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:31:39.778 [2024-12-05 14:02:39.402324] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:31:39.778 [2024-12-05 14:02:39.402335] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x181900 00:31:39.778 [2024-12-05 14:02:39.402340] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:0 cdw10:00000001 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:39.778 [2024-12-05 14:02:39.402345] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x181900 00:31:39.778 [2024-12-05 14:02:39.402350] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:31:39.778 [2024-12-05 14:02:39.402357] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:39.778 [2024-12-05 14:02:39.402361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:31:39.778 [2024-12-05 14:02:39.402366] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x181900 00:31:39.778 [2024-12-05 14:02:39.402372] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x181900 00:31:39.778 [2024-12-05 14:02:39.402382] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:0 cdw10:00000002 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:39.778 [2024-12-05 14:02:39.402388] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:39.778 [2024-12-05 14:02:39.402393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:31:39.778 [2024-12-05 14:02:39.402397] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x181900 00:31:39.778 [2024-12-05 14:02:39.402401] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:39.778 [2024-12-05 14:02:39.402405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:31:39.778 [2024-12-05 14:02:39.402409] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x181900 00:31:39.778 [2024-12-05 14:02:39.402415] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x181900 00:31:39.778 [2024-12-05 14:02:39.402420] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:0 cdw10:00000004 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:39.778 [2024-12-05 14:02:39.402437] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:39.778 [2024-12-05 14:02:39.402441] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:31:39.778 [2024-12-05 14:02:39.402445] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x181900 00:31:39.778 [2024-12-05 14:02:39.402451] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x181900 00:31:39.778 [2024-12-05 14:02:39.402457] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:39.778 [2024-12-05 14:02:39.402473] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:39.778 [2024-12-05 14:02:39.402477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:7e007e sqhd:0013 p:0 m:0 dnr:0 00:31:39.778 [2024-12-05 14:02:39.402481] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x181900 00:31:39.778 [2024-12-05 14:02:39.402491] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x181900 00:31:39.778 [2024-12-05 14:02:39.402496] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:0 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003c9000 len:0x2000 key:0x181900 00:31:39.778 [2024-12-05 14:02:39.402502] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x181900 00:31:39.778 [2024-12-05 14:02:39.402508] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x200 key:0x181900 00:31:39.778 [2024-12-05 14:02:39.402513] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0b00 length 0x40 lkey 0x181900 00:31:39.778 [2024-12-05 14:02:39.402518] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x200 key:0x181900 00:31:39.778 [2024-12-05 14:02:39.402525] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0c40 length 0x40 lkey 0x181900 00:31:39.778 [2024-12-05 14:02:39.402530] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003c7000 len:0x1000 key:0x181900 00:31:39.778 [2024-12-05 14:02:39.402536] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:39.778 [2024-12-05 14:02:39.402539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:31:39.778 [2024-12-05 14:02:39.402547] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x181900 00:31:39.778 [2024-12-05 14:02:39.402556] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:39.778 [2024-12-05 14:02:39.402561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:31:39.778 [2024-12-05 14:02:39.402570] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x181900 00:31:39.778 [2024-12-05 14:02:39.402573] 
nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:39.778 [2024-12-05 14:02:39.402577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:6 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:31:39.778 [2024-12-05 14:02:39.402582] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x181900 00:31:39.778 [2024-12-05 14:02:39.402588] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:39.778 [2024-12-05 14:02:39.402592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:7 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:31:39.778 [2024-12-05 14:02:39.402598] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x181900 00:31:39.778 ===================================================== 00:31:39.778 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:31:39.778 ===================================================== 00:31:39.778 Controller Capabilities/Features 00:31:39.778 ================================ 00:31:39.778 Vendor ID: 8086 00:31:39.778 Subsystem Vendor ID: 8086 00:31:39.779 Serial Number: SPDK00000000000001 00:31:39.779 Model Number: SPDK bdev Controller 00:31:39.779 Firmware Version: 25.01 00:31:39.779 Recommended Arb Burst: 6 00:31:39.779 IEEE OUI Identifier: e4 d2 5c 00:31:39.779 Multi-path I/O 00:31:39.779 May have multiple subsystem ports: Yes 00:31:39.779 May have multiple controllers: Yes 00:31:39.779 Associated with SR-IOV VF: No 00:31:39.779 Max Data Transfer Size: 131072 00:31:39.779 Max Number of Namespaces: 32 00:31:39.779 Max Number of I/O Queues: 127 00:31:39.779 NVMe Specification Version (VS): 1.3 00:31:39.779 NVMe Specification Version (Identify): 1.3 00:31:39.779 Maximum Queue Entries: 128 00:31:39.779 Contiguous Queues Required: Yes 00:31:39.779 Arbitration Mechanisms Supported 00:31:39.779 Weighted Round Robin: Not Supported 00:31:39.779 Vendor Specific: Not Supported 00:31:39.779 Reset Timeout: 15000 ms 00:31:39.779 Doorbell Stride: 4 bytes 00:31:39.779 NVM Subsystem Reset: Not Supported 00:31:39.779 Command Sets Supported 00:31:39.779 NVM Command Set: Supported 00:31:39.779 Boot Partition: Not Supported 00:31:39.779 Memory Page Size Minimum: 4096 bytes 00:31:39.779 Memory Page Size Maximum: 4096 bytes 00:31:39.779 Persistent Memory Region: Not Supported 00:31:39.779 Optional Asynchronous Events Supported 00:31:39.779 Namespace Attribute Notices: Supported 00:31:39.779 Firmware Activation Notices: Not Supported 00:31:39.779 ANA Change Notices: Not Supported 00:31:39.779 PLE Aggregate Log Change Notices: Not Supported 00:31:39.779 LBA Status Info Alert Notices: Not Supported 00:31:39.779 EGE Aggregate Log Change Notices: Not Supported 00:31:39.779 Normal NVM Subsystem Shutdown event: Not Supported 00:31:39.779 Zone Descriptor Change Notices: Not Supported 00:31:39.779 Discovery Log Change Notices: Not Supported 00:31:39.779 Controller Attributes 00:31:39.779 128-bit Host Identifier: Supported 00:31:39.779 Non-Operational Permissive Mode: Not Supported 00:31:39.779 NVM Sets: Not Supported 00:31:39.779 Read Recovery Levels: Not Supported 00:31:39.779 Endurance Groups: Not Supported 00:31:39.779 Predictable Latency Mode: Not Supported 00:31:39.779 Traffic Based Keep ALive: Not Supported 00:31:39.779 Namespace Granularity: Not Supported 00:31:39.779 SQ Associations: Not Supported 00:31:39.779 UUID List: Not Supported 00:31:39.779 Multi-Domain Subsystem: Not 
Supported 00:31:39.779 Fixed Capacity Management: Not Supported 00:31:39.779 Variable Capacity Management: Not Supported 00:31:39.779 Delete Endurance Group: Not Supported 00:31:39.779 Delete NVM Set: Not Supported 00:31:39.779 Extended LBA Formats Supported: Not Supported 00:31:39.779 Flexible Data Placement Supported: Not Supported 00:31:39.779 00:31:39.779 Controller Memory Buffer Support 00:31:39.779 ================================ 00:31:39.779 Supported: No 00:31:39.779 00:31:39.779 Persistent Memory Region Support 00:31:39.779 ================================ 00:31:39.779 Supported: No 00:31:39.779 00:31:39.779 Admin Command Set Attributes 00:31:39.779 ============================ 00:31:39.779 Security Send/Receive: Not Supported 00:31:39.779 Format NVM: Not Supported 00:31:39.779 Firmware Activate/Download: Not Supported 00:31:39.779 Namespace Management: Not Supported 00:31:39.779 Device Self-Test: Not Supported 00:31:39.779 Directives: Not Supported 00:31:39.779 NVMe-MI: Not Supported 00:31:39.779 Virtualization Management: Not Supported 00:31:39.779 Doorbell Buffer Config: Not Supported 00:31:39.779 Get LBA Status Capability: Not Supported 00:31:39.779 Command & Feature Lockdown Capability: Not Supported 00:31:39.779 Abort Command Limit: 4 00:31:39.779 Async Event Request Limit: 4 00:31:39.779 Number of Firmware Slots: N/A 00:31:39.779 Firmware Slot 1 Read-Only: N/A 00:31:39.779 Firmware Activation Without Reset: N/A 00:31:39.779 Multiple Update Detection Support: N/A 00:31:39.779 Firmware Update Granularity: No Information Provided 00:31:39.779 Per-Namespace SMART Log: No 00:31:39.779 Asymmetric Namespace Access Log Page: Not Supported 00:31:39.779 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:31:39.779 Command Effects Log Page: Supported 00:31:39.779 Get Log Page Extended Data: Supported 00:31:39.779 Telemetry Log Pages: Not Supported 00:31:39.779 Persistent Event Log Pages: Not Supported 00:31:39.779 Supported Log Pages Log Page: May Support 00:31:39.779 Commands Supported & Effects Log Page: Not Supported 00:31:39.779 Feature Identifiers & Effects Log Page:May Support 00:31:39.779 NVMe-MI Commands & Effects Log Page: May Support 00:31:39.779 Data Area 4 for Telemetry Log: Not Supported 00:31:39.779 Error Log Page Entries Supported: 128 00:31:39.779 Keep Alive: Supported 00:31:39.779 Keep Alive Granularity: 10000 ms 00:31:39.779 00:31:39.779 NVM Command Set Attributes 00:31:39.779 ========================== 00:31:39.779 Submission Queue Entry Size 00:31:39.779 Max: 64 00:31:39.779 Min: 64 00:31:39.779 Completion Queue Entry Size 00:31:39.779 Max: 16 00:31:39.779 Min: 16 00:31:39.779 Number of Namespaces: 32 00:31:39.779 Compare Command: Supported 00:31:39.779 Write Uncorrectable Command: Not Supported 00:31:39.779 Dataset Management Command: Supported 00:31:39.779 Write Zeroes Command: Supported 00:31:39.779 Set Features Save Field: Not Supported 00:31:39.779 Reservations: Supported 00:31:39.779 Timestamp: Not Supported 00:31:39.779 Copy: Supported 00:31:39.779 Volatile Write Cache: Present 00:31:39.779 Atomic Write Unit (Normal): 1 00:31:39.779 Atomic Write Unit (PFail): 1 00:31:39.779 Atomic Compare & Write Unit: 1 00:31:39.779 Fused Compare & Write: Supported 00:31:39.779 Scatter-Gather List 00:31:39.779 SGL Command Set: Supported 00:31:39.779 SGL Keyed: Supported 00:31:39.779 SGL Bit Bucket Descriptor: Not Supported 00:31:39.779 SGL Metadata Pointer: Not Supported 00:31:39.779 Oversized SGL: Not Supported 00:31:39.779 SGL Metadata Address: Not Supported 00:31:39.779 SGL 
Offset: Supported 00:31:39.779 Transport SGL Data Block: Not Supported 00:31:39.779 Replay Protected Memory Block: Not Supported 00:31:39.779 00:31:39.779 Firmware Slot Information 00:31:39.779 ========================= 00:31:39.779 Active slot: 1 00:31:39.780 Slot 1 Firmware Revision: 25.01 00:31:39.780 00:31:39.780 00:31:39.780 Commands Supported and Effects 00:31:39.780 ============================== 00:31:39.780 Admin Commands 00:31:39.780 -------------- 00:31:39.780 Get Log Page (02h): Supported 00:31:39.780 Identify (06h): Supported 00:31:39.780 Abort (08h): Supported 00:31:39.780 Set Features (09h): Supported 00:31:39.780 Get Features (0Ah): Supported 00:31:39.780 Asynchronous Event Request (0Ch): Supported 00:31:39.780 Keep Alive (18h): Supported 00:31:39.780 I/O Commands 00:31:39.780 ------------ 00:31:39.780 Flush (00h): Supported LBA-Change 00:31:39.780 Write (01h): Supported LBA-Change 00:31:39.780 Read (02h): Supported 00:31:39.780 Compare (05h): Supported 00:31:39.780 Write Zeroes (08h): Supported LBA-Change 00:31:39.780 Dataset Management (09h): Supported LBA-Change 00:31:39.780 Copy (19h): Supported LBA-Change 00:31:39.780 00:31:39.780 Error Log 00:31:39.780 ========= 00:31:39.780 00:31:39.780 Arbitration 00:31:39.780 =========== 00:31:39.780 Arbitration Burst: 1 00:31:39.780 00:31:39.780 Power Management 00:31:39.780 ================ 00:31:39.780 Number of Power States: 1 00:31:39.780 Current Power State: Power State #0 00:31:39.780 Power State #0: 00:31:39.780 Max Power: 0.00 W 00:31:39.780 Non-Operational State: Operational 00:31:39.780 Entry Latency: Not Reported 00:31:39.780 Exit Latency: Not Reported 00:31:39.780 Relative Read Throughput: 0 00:31:39.780 Relative Read Latency: 0 00:31:39.780 Relative Write Throughput: 0 00:31:39.780 Relative Write Latency: 0 00:31:39.780 Idle Power: Not Reported 00:31:39.780 Active Power: Not Reported 00:31:39.780 Non-Operational Permissive Mode: Not Supported 00:31:39.780 00:31:39.780 Health Information 00:31:39.780 ================== 00:31:39.780 Critical Warnings: 00:31:39.780 Available Spare Space: OK 00:31:39.780 Temperature: OK 00:31:39.780 Device Reliability: OK 00:31:39.780 Read Only: No 00:31:39.780 Volatile Memory Backup: OK 00:31:39.780 Current Temperature: 0 Kelvin (-273 Celsius) 00:31:39.780 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:31:39.780 Available Spare: 0% 00:31:39.780 Available Spare Threshold: 0% 00:31:39.780 Life Percentage [2024-12-05 14:02:39.402666] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0c40 length 0x40 lkey 0x181900 00:31:39.780 [2024-12-05 14:02:39.402672] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:39.780 [2024-12-05 14:02:39.402695] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:39.780 [2024-12-05 14:02:39.402699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:7 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:31:39.780 [2024-12-05 14:02:39.402703] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x181900 00:31:39.780 [2024-12-05 14:02:39.402725] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:31:39.780 [2024-12-05 14:02:39.402732] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 24814 doesn't match qid 00:31:39.780 [2024-12-05 14:02:39.402743] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32541 cdw0:509e84f0 sqhd:6a40 p:0 m:0 dnr:0 00:31:39.780 [2024-12-05 14:02:39.402747] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 24814 doesn't match qid 00:31:39.780 [2024-12-05 14:02:39.402753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32541 cdw0:509e84f0 sqhd:6a40 p:0 m:0 dnr:0 00:31:39.780 [2024-12-05 14:02:39.402757] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 24814 doesn't match qid 00:31:39.780 [2024-12-05 14:02:39.402762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32541 cdw0:509e84f0 sqhd:6a40 p:0 m:0 dnr:0 00:31:39.780 [2024-12-05 14:02:39.402766] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 24814 doesn't match qid 00:31:39.780 [2024-12-05 14:02:39.402771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32541 cdw0:509e84f0 sqhd:6a40 p:0 m:0 dnr:0 00:31:39.780 [2024-12-05 14:02:39.402777] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0880 length 0x40 lkey 0x181900 00:31:39.780 [2024-12-05 14:02:39.402783] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:4 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:39.780 [2024-12-05 14:02:39.402798] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:39.780 [2024-12-05 14:02:39.402802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:4 cdw0:460001 sqhd:0019 p:0 m:0 dnr:0 00:31:39.780 [2024-12-05 14:02:39.402808] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181900 00:31:39.780 [2024-12-05 14:02:39.402813] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:39.780 [2024-12-05 14:02:39.402817] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x181900 00:31:39.780 [2024-12-05 14:02:39.402833] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:39.780 [2024-12-05 14:02:39.402839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:31:39.780 [2024-12-05 14:02:39.402843] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:31:39.780 [2024-12-05 14:02:39.402847] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:31:39.780 [2024-12-05 14:02:39.402850] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x181900 00:31:39.780 [2024-12-05 14:02:39.402857] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181900 00:31:39.780 [2024-12-05 14:02:39.402862] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:39.780 [2024-12-05 14:02:39.402878] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:39.780 [2024-12-05 14:02:39.402882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:31:39.780 [2024-12-05 14:02:39.402886] 
nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa50 length 0x10 lkey 0x181900 00:31:39.780 [2024-12-05 14:02:39.402892] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181900 00:31:39.780 [2024-12-05 14:02:39.402897] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:39.780 [2024-12-05 14:02:39.402914] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:39.780 [2024-12-05 14:02:39.402917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:31:39.780 [2024-12-05 14:02:39.402922] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa78 length 0x10 lkey 0x181900 00:31:39.780 [2024-12-05 14:02:39.402928] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181900 00:31:39.780 [2024-12-05 14:02:39.402933] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:39.780 [2024-12-05 14:02:39.402949] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:39.780 [2024-12-05 14:02:39.402953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:31:39.780 [2024-12-05 14:02:39.402958] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaa0 length 0x10 lkey 0x181900 00:31:39.780 [2024-12-05 14:02:39.402964] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181900 00:31:39.781 [2024-12-05 14:02:39.402970] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:39.781 [2024-12-05 14:02:39.402992] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:39.781 [2024-12-05 14:02:39.402996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:31:39.781 [2024-12-05 14:02:39.403000] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfac8 length 0x10 lkey 0x181900 00:31:39.781 [2024-12-05 14:02:39.403006] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181900 00:31:39.781 [2024-12-05 14:02:39.403012] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:39.781 [2024-12-05 14:02:39.403029] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:39.781 [2024-12-05 14:02:39.403033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:31:39.781 [2024-12-05 14:02:39.403038] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaf0 length 0x10 lkey 0x181900 00:31:39.781 [2024-12-05 14:02:39.403045] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181900 00:31:39.781 [2024-12-05 14:02:39.403050] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:39.781 [2024-12-05 14:02:39.403072] 
nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:39.781 [2024-12-05 14:02:39.403077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:31:39.781 [2024-12-05 14:02:39.403081] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x181900 00:31:39.781 [2024-12-05 14:02:39.403087] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181900 00:31:39.781 [2024-12-05 14:02:39.403093] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:39.781 [2024-12-05 14:02:39.403111] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:39.781 [2024-12-05 14:02:39.403115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:31:39.781 [2024-12-05 14:02:39.403120] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x181900 00:31:39.781 [2024-12-05 14:02:39.403126] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181900 00:31:39.781 [2024-12-05 14:02:39.403132] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:39.781 [2024-12-05 14:02:39.403157] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:39.781 [2024-12-05 14:02:39.403161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:31:39.781 [2024-12-05 14:02:39.403165] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x181900 00:31:39.781 [2024-12-05 14:02:39.403171] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181900 00:31:39.781 [2024-12-05 14:02:39.403177] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:39.781 [2024-12-05 14:02:39.403199] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:39.781 [2024-12-05 14:02:39.403203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:31:39.781 [2024-12-05 14:02:39.403207] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x181900 00:31:39.781 [2024-12-05 14:02:39.403213] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181900 00:31:39.781 [2024-12-05 14:02:39.403219] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:39.781 [2024-12-05 14:02:39.403239] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:39.781 [2024-12-05 14:02:39.403243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:31:39.781 [2024-12-05 14:02:39.403247] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x181900 00:31:39.781 [2024-12-05 14:02:39.403254] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 
length 0x40 lkey 0x181900 00:31:39.781 [2024-12-05 14:02:39.403259] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:39.781 [2024-12-05 14:02:39.403280] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:39.781 [2024-12-05 14:02:39.403284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:31:39.781 [2024-12-05 14:02:39.403289] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x181900 00:31:39.781 [2024-12-05 14:02:39.403296] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181900 00:31:39.781 [2024-12-05 14:02:39.403301] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:39.781 [2024-12-05 14:02:39.403322] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:39.781 [2024-12-05 14:02:39.403326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:31:39.781 [2024-12-05 14:02:39.403330] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x181900 00:31:39.781 [2024-12-05 14:02:39.403336] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181900 00:31:39.781 [2024-12-05 14:02:39.403342] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:39.781 [2024-12-05 14:02:39.403358] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:39.781 [2024-12-05 14:02:39.403362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:31:39.781 [2024-12-05 14:02:39.403366] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x181900 00:31:39.781 [2024-12-05 14:02:39.403372] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181900 00:31:39.781 [2024-12-05 14:02:39.403382] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:39.781 [2024-12-05 14:02:39.403400] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:39.781 [2024-12-05 14:02:39.403404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:31:39.781 [2024-12-05 14:02:39.403408] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x181900 00:31:39.782 [2024-12-05 14:02:39.403415] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181900 00:31:39.782 [2024-12-05 14:02:39.403420] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:39.782 [2024-12-05 14:02:39.403433] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:39.782 [2024-12-05 14:02:39.403437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:31:39.782 [2024-12-05 
14:02:39.403441] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x181900 00:31:39.782 [2024-12-05 14:02:39.403447] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181900 00:31:39.782 [2024-12-05 14:02:39.403453] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:39.782 [2024-12-05 14:02:39.403470] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:39.782 [2024-12-05 14:02:39.403474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000a p:0 m:0 dnr:0 00:31:39.782 [2024-12-05 14:02:39.403478] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x181900 00:31:39.782 [2024-12-05 14:02:39.403484] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181900 00:31:39.782 [2024-12-05 14:02:39.403489] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:39.782 [2024-12-05 14:02:39.403505] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:39.782 [2024-12-05 14:02:39.403509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000b p:0 m:0 dnr:0 00:31:39.782 [2024-12-05 14:02:39.403514] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x181900 00:31:39.782 [2024-12-05 14:02:39.403520] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181900 00:31:39.782 [2024-12-05 14:02:39.403526] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:39.782 [2024-12-05 14:02:39.403543] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:39.782 [2024-12-05 14:02:39.403547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000c p:0 m:0 dnr:0 00:31:39.782 [2024-12-05 14:02:39.403551] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x181900 00:31:39.782 [2024-12-05 14:02:39.403557] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181900 00:31:39.782 [2024-12-05 14:02:39.403563] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:39.782 [2024-12-05 14:02:39.403584] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:39.782 [2024-12-05 14:02:39.403588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000d p:0 m:0 dnr:0 00:31:39.782 [2024-12-05 14:02:39.403592] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x181900 00:31:39.782 [2024-12-05 14:02:39.403598] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181900 00:31:39.782 [2024-12-05 14:02:39.403603] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:39.782 [2024-12-05 14:02:39.403618] 
nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:39.782 [2024-12-05 14:02:39.403622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000e p:0 m:0 dnr:0 00:31:39.782 [2024-12-05 14:02:39.403626] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x181900 00:31:39.782 [2024-12-05 14:02:39.403632] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181900 00:31:39.782 [2024-12-05 14:02:39.403637] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:39.782 [2024-12-05 14:02:39.403654] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:39.782 [2024-12-05 14:02:39.403658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000f p:0 m:0 dnr:0 00:31:39.782 [2024-12-05 14:02:39.403662] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x181900 00:31:39.782 [2024-12-05 14:02:39.403669] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181900 00:31:39.782 [2024-12-05 14:02:39.403674] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:39.782 [2024-12-05 14:02:39.403695] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:39.782 [2024-12-05 14:02:39.403699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0010 p:0 m:0 dnr:0 00:31:39.782 [2024-12-05 14:02:39.403703] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x181900 00:31:39.782 [2024-12-05 14:02:39.403710] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181900 00:31:39.782 [2024-12-05 14:02:39.403715] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:39.782 [2024-12-05 14:02:39.403732] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:39.782 [2024-12-05 14:02:39.403736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0011 p:0 m:0 dnr:0 00:31:39.782 [2024-12-05 14:02:39.403741] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x181900 00:31:39.782 [2024-12-05 14:02:39.403747] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181900 00:31:39.782 [2024-12-05 14:02:39.403753] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:39.782 [2024-12-05 14:02:39.403771] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:39.782 [2024-12-05 14:02:39.403775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:31:39.782 [2024-12-05 14:02:39.403779] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x181900 00:31:39.782 [2024-12-05 14:02:39.403785] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 
length 0x40 lkey 0x181900 00:31:39.782 [2024-12-05 14:02:39.403790] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:39.782 [2024-12-05 14:02:39.403804] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:39.782 [2024-12-05 14:02:39.403808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:31:39.782 [2024-12-05 14:02:39.403812] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x181900 00:31:39.782 [2024-12-05 14:02:39.403819] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181900 00:31:39.782 [2024-12-05 14:02:39.403824] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:39.782 [2024-12-05 14:02:39.403838] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:39.782 [2024-12-05 14:02:39.403842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:31:39.782 [2024-12-05 14:02:39.403846] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x181900 00:31:39.782 [2024-12-05 14:02:39.403852] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181900 00:31:39.782 [2024-12-05 14:02:39.403857] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:39.782 [2024-12-05 14:02:39.403870] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:39.782 [2024-12-05 14:02:39.403874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:31:39.782 [2024-12-05 14:02:39.403878] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x181900 00:31:39.782 [2024-12-05 14:02:39.403884] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181900 00:31:39.782 [2024-12-05 14:02:39.403890] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:39.782 [2024-12-05 14:02:39.403908] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:39.782 [2024-12-05 14:02:39.403912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:31:39.782 [2024-12-05 14:02:39.403916] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x181900 00:31:39.783 [2024-12-05 14:02:39.403922] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181900 00:31:39.783 [2024-12-05 14:02:39.403928] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:39.783 [2024-12-05 14:02:39.403943] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:39.783 [2024-12-05 14:02:39.403948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:31:39.783 [2024-12-05 
14:02:39.403952] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x181900 00:31:39.783 [2024-12-05 14:02:39.403958] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181900 00:31:39.783 [2024-12-05 14:02:39.403964] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:39.783 [2024-12-05 14:02:39.403982] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:39.783 [2024-12-05 14:02:39.403986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:31:39.783 [2024-12-05 14:02:39.403990] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x181900 00:31:39.783 [2024-12-05 14:02:39.403996] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181900 00:31:39.783 [2024-12-05 14:02:39.404002] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:39.783 [2024-12-05 14:02:39.404018] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:39.783 [2024-12-05 14:02:39.404021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0 00:31:39.783 [2024-12-05 14:02:39.404026] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x181900 00:31:39.783 [2024-12-05 14:02:39.404032] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181900 00:31:39.783 [2024-12-05 14:02:39.404037] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:39.783 [2024-12-05 14:02:39.404057] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:39.783 [2024-12-05 14:02:39.404061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 dnr:0 00:31:39.783 [2024-12-05 14:02:39.404065] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa50 length 0x10 lkey 0x181900 00:31:39.783 [2024-12-05 14:02:39.404071] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181900 00:31:39.783 [2024-12-05 14:02:39.404076] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:39.783 [2024-12-05 14:02:39.404094] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:39.783 [2024-12-05 14:02:39.404097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:31:39.783 [2024-12-05 14:02:39.404102] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa78 length 0x10 lkey 0x181900 00:31:39.783 [2024-12-05 14:02:39.404108] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181900 00:31:39.783 [2024-12-05 14:02:39.404113] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:39.783 [2024-12-05 14:02:39.404127] 
nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:39.783 [2024-12-05 14:02:39.404131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:31:39.783 [2024-12-05 14:02:39.404135] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaa0 length 0x10 lkey 0x181900 00:31:39.783 [2024-12-05 14:02:39.404142] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181900 00:31:39.783 [2024-12-05 14:02:39.404147] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:39.783 [2024-12-05 14:02:39.404166] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:39.783 [2024-12-05 14:02:39.404172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:31:39.783 [2024-12-05 14:02:39.404176] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfac8 length 0x10 lkey 0x181900 00:31:39.783 [2024-12-05 14:02:39.404182] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181900 00:31:39.783 [2024-12-05 14:02:39.404188] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:39.783 [2024-12-05 14:02:39.404205] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:39.783 [2024-12-05 14:02:39.404208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:31:39.783 [2024-12-05 14:02:39.404212] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaf0 length 0x10 lkey 0x181900 00:31:39.783 [2024-12-05 14:02:39.404219] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181900 00:31:39.783 [2024-12-05 14:02:39.404224] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:39.783 [2024-12-05 14:02:39.404238] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:39.783 [2024-12-05 14:02:39.404242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:31:39.783 [2024-12-05 14:02:39.404246] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x181900 00:31:39.783 [2024-12-05 14:02:39.404252] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181900 00:31:39.783 [2024-12-05 14:02:39.404258] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:39.783 [2024-12-05 14:02:39.404275] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:39.783 [2024-12-05 14:02:39.404279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:31:39.783 [2024-12-05 14:02:39.404283] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x181900 00:31:39.783 [2024-12-05 14:02:39.404289] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 
length 0x40 lkey 0x181900 00:31:39.783 [2024-12-05 14:02:39.404294] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:39.783 [2024-12-05 14:02:39.404311] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:39.783 [2024-12-05 14:02:39.404315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:31:39.783 [2024-12-05 14:02:39.404319] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x181900 00:31:39.783 [2024-12-05 14:02:39.404325] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181900 00:31:39.783 [2024-12-05 14:02:39.404331] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:39.783 [2024-12-05 14:02:39.404352] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:39.783 [2024-12-05 14:02:39.404356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:31:39.783 [2024-12-05 14:02:39.404360] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x181900 00:31:39.783 [2024-12-05 14:02:39.404366] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181900 00:31:39.783 [2024-12-05 14:02:39.404371] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:39.783 [2024-12-05 14:02:39.404397] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:39.783 [2024-12-05 14:02:39.404401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:31:39.783 [2024-12-05 14:02:39.404405] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x181900 00:31:39.783 [2024-12-05 14:02:39.404411] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181900 00:31:39.783 [2024-12-05 14:02:39.404417] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:39.783 [2024-12-05 14:02:39.404440] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:39.783 [2024-12-05 14:02:39.404444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:31:39.783 [2024-12-05 14:02:39.404448] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x181900 00:31:39.783 [2024-12-05 14:02:39.404454] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181900 00:31:39.783 [2024-12-05 14:02:39.404459] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:39.784 [2024-12-05 14:02:39.404481] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:39.784 [2024-12-05 14:02:39.404485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:31:39.784 [2024-12-05 
14:02:39.404489] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x181900 00:31:39.784 [2024-12-05 14:02:39.404495] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181900 00:31:39.784 [2024-12-05 14:02:39.404500] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:39.784 [2024-12-05 14:02:39.404516] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:39.784 [2024-12-05 14:02:39.404520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:31:39.784 [2024-12-05 14:02:39.404524] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x181900 00:31:39.784 [2024-12-05 14:02:39.404530] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181900 00:31:39.784 [2024-12-05 14:02:39.404535] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:39.784 [2024-12-05 14:02:39.404554] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:39.784 [2024-12-05 14:02:39.404558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:31:39.784 [2024-12-05 14:02:39.404562] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x181900 00:31:39.784 [2024-12-05 14:02:39.404568] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181900 00:31:39.784 [2024-12-05 14:02:39.404573] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:39.784 [2024-12-05 14:02:39.404589] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:39.784 [2024-12-05 14:02:39.404593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:31:39.784 [2024-12-05 14:02:39.404597] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x181900 00:31:39.784 [2024-12-05 14:02:39.404603] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181900 00:31:39.784 [2024-12-05 14:02:39.404608] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:39.784 [2024-12-05 14:02:39.404628] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:39.784 [2024-12-05 14:02:39.404632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:31:39.784 [2024-12-05 14:02:39.404636] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x181900 00:31:39.784 [2024-12-05 14:02:39.404643] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181900 00:31:39.784 [2024-12-05 14:02:39.404648] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:39.784 [2024-12-05 14:02:39.404663] 
nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:39.784 [2024-12-05 14:02:39.404667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000a p:0 m:0 dnr:0 00:31:39.784 [2024-12-05 14:02:39.404671] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x181900 00:31:39.784 [2024-12-05 14:02:39.404678] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181900 00:31:39.784 [2024-12-05 14:02:39.404683] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:39.784 [2024-12-05 14:02:39.404703] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:39.784 [2024-12-05 14:02:39.404707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000b p:0 m:0 dnr:0 00:31:39.784 [2024-12-05 14:02:39.404711] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x181900 00:31:39.784 [2024-12-05 14:02:39.404717] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181900 00:31:39.784 [2024-12-05 14:02:39.404722] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:39.784 [2024-12-05 14:02:39.404739] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:39.784 [2024-12-05 14:02:39.404743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000c p:0 m:0 dnr:0 00:31:39.784 [2024-12-05 14:02:39.404747] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x181900 00:31:39.784 [2024-12-05 14:02:39.404754] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181900 00:31:39.784 [2024-12-05 14:02:39.404759] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:39.784 [2024-12-05 14:02:39.404774] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:39.784 [2024-12-05 14:02:39.404778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000d p:0 m:0 dnr:0 00:31:39.784 [2024-12-05 14:02:39.404782] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x181900 00:31:39.784 [2024-12-05 14:02:39.404788] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181900 00:31:39.784 [2024-12-05 14:02:39.404794] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:39.784 [2024-12-05 14:02:39.404809] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:39.784 [2024-12-05 14:02:39.404813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000e p:0 m:0 dnr:0 00:31:39.784 [2024-12-05 14:02:39.404817] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x181900 00:31:39.784 [2024-12-05 14:02:39.404823] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 
length 0x40 lkey 0x181900 00:31:39.784 [2024-12-05 14:02:39.404830] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:39.784 [2024-12-05 14:02:39.404850] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:39.784 [2024-12-05 14:02:39.404854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000f p:0 m:0 dnr:0 00:31:39.784 [2024-12-05 14:02:39.404858] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x181900 00:31:39.784 [2024-12-05 14:02:39.404864] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181900 00:31:39.784 [2024-12-05 14:02:39.404869] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:39.784 [2024-12-05 14:02:39.404887] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:39.784 [2024-12-05 14:02:39.404890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0010 p:0 m:0 dnr:0 00:31:39.784 [2024-12-05 14:02:39.404894] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x181900 00:31:39.784 [2024-12-05 14:02:39.404901] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181900 00:31:39.784 [2024-12-05 14:02:39.404906] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:39.784 [2024-12-05 14:02:39.404925] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:39.784 [2024-12-05 14:02:39.404929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0011 p:0 m:0 dnr:0 00:31:39.784 [2024-12-05 14:02:39.404933] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x181900 00:31:39.784 [2024-12-05 14:02:39.404939] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181900 00:31:39.784 [2024-12-05 14:02:39.404944] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:39.784 [2024-12-05 14:02:39.404959] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:39.784 [2024-12-05 14:02:39.404962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:31:39.784 [2024-12-05 14:02:39.404966] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x181900 00:31:39.784 [2024-12-05 14:02:39.404973] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181900 00:31:39.785 [2024-12-05 14:02:39.404978] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:39.785 [2024-12-05 14:02:39.404998] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:39.785 [2024-12-05 14:02:39.405002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:31:39.785 [2024-12-05 
14:02:39.405006] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x181900 00:31:39.785 [2024-12-05 14:02:39.405012] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181900 00:31:39.785 [2024-12-05 14:02:39.405017] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:39.785 [2024-12-05 14:02:39.405039] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:39.785 [2024-12-05 14:02:39.405043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:31:39.785 [2024-12-05 14:02:39.405047] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x181900 00:31:39.785 [2024-12-05 14:02:39.405053] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181900 00:31:39.785 [2024-12-05 14:02:39.405059] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:39.785 [2024-12-05 14:02:39.405082] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:39.785 [2024-12-05 14:02:39.405086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:31:39.785 [2024-12-05 14:02:39.405090] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x181900 00:31:39.785 [2024-12-05 14:02:39.405096] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181900 00:31:39.785 [2024-12-05 14:02:39.405102] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:39.785 [2024-12-05 14:02:39.405120] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:39.785 [2024-12-05 14:02:39.405124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:31:39.785 [2024-12-05 14:02:39.405128] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x181900 00:31:39.785 [2024-12-05 14:02:39.405134] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181900 00:31:39.785 [2024-12-05 14:02:39.405140] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:39.785 [2024-12-05 14:02:39.405154] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:39.785 [2024-12-05 14:02:39.405158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:31:39.785 [2024-12-05 14:02:39.405162] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x181900 00:31:39.785 [2024-12-05 14:02:39.405168] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181900 00:31:39.785 [2024-12-05 14:02:39.405173] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:39.785 [2024-12-05 14:02:39.405195] 
nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:39.785 [2024-12-05 14:02:39.405199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:31:39.785 [2024-12-05 14:02:39.405203] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x181900 00:31:39.785 [2024-12-05 14:02:39.405209] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181900 00:31:39.785 [2024-12-05 14:02:39.405214] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:39.785 [2024-12-05 14:02:39.405230] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:39.785 [2024-12-05 14:02:39.405234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0 00:31:39.785 [2024-12-05 14:02:39.405238] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa50 length 0x10 lkey 0x181900 00:31:39.785 [2024-12-05 14:02:39.405244] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181900 00:31:39.785 [2024-12-05 14:02:39.405249] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:39.785 [2024-12-05 14:02:39.405266] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:39.785 [2024-12-05 14:02:39.405270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 dnr:0 00:31:39.785 [2024-12-05 14:02:39.405274] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa78 length 0x10 lkey 0x181900 00:31:39.785 [2024-12-05 14:02:39.405282] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181900 00:31:39.785 [2024-12-05 14:02:39.405287] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:39.785 [2024-12-05 14:02:39.405306] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:39.785 [2024-12-05 14:02:39.405309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:31:39.785 [2024-12-05 14:02:39.405314] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaa0 length 0x10 lkey 0x181900 00:31:39.785 [2024-12-05 14:02:39.405320] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181900 00:31:39.785 [2024-12-05 14:02:39.405325] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:39.785 [2024-12-05 14:02:39.405345] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:39.785 [2024-12-05 14:02:39.405349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:31:39.785 [2024-12-05 14:02:39.405353] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfac8 length 0x10 lkey 0x181900 00:31:39.785 [2024-12-05 14:02:39.405359] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 
length 0x40 lkey 0x181900 00:31:39.785 [2024-12-05 14:02:39.405365] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:39.785 [2024-12-05 14:02:39.409379] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:39.785 [2024-12-05 14:02:39.409385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:31:39.785 [2024-12-05 14:02:39.409389] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaf0 length 0x10 lkey 0x181900 00:31:39.785 [2024-12-05 14:02:39.409395] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181900 00:31:39.785 [2024-12-05 14:02:39.409401] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:39.785 [2024-12-05 14:02:39.409419] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:39.785 [2024-12-05 14:02:39.409423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:9 sqhd:001e p:0 m:0 dnr:0 00:31:39.785 [2024-12-05 14:02:39.409427] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x181900 00:31:39.785 [2024-12-05 14:02:39.409432] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 6 milliseconds 00:31:39.785 Used: 0% 00:31:39.785 Data Units Read: 0 00:31:39.785 Data Units Written: 0 00:31:39.785 Host Read Commands: 0 00:31:39.785 Host Write Commands: 0 00:31:39.785 Controller Busy Time: 0 minutes 00:31:39.785 Power Cycles: 0 00:31:39.785 Power On Hours: 0 hours 00:31:39.785 Unsafe Shutdowns: 0 00:31:39.785 Unrecoverable Media Errors: 0 00:31:39.785 Lifetime Error Log Entries: 0 00:31:39.785 Warning Temperature Time: 0 minutes 00:31:39.785 Critical Temperature Time: 0 minutes 00:31:39.785 00:31:39.785 Number of Queues 00:31:39.785 ================ 00:31:39.785 Number of I/O Submission Queues: 127 00:31:39.785 Number of I/O Completion Queues: 127 00:31:39.785 00:31:39.785 Active Namespaces 00:31:39.785 ================= 00:31:39.785 Namespace ID:1 00:31:39.786 Error Recovery Timeout: Unlimited 00:31:39.786 Command Set Identifier: NVM (00h) 00:31:39.786 Deallocate: Supported 00:31:39.786 Deallocated/Unwritten Error: Not Supported 00:31:39.786 Deallocated Read Value: Unknown 00:31:39.786 Deallocate in Write Zeroes: Not Supported 00:31:39.786 Deallocated Guard Field: 0xFFFF 00:31:39.786 Flush: Supported 00:31:39.786 Reservation: Supported 00:31:39.786 Namespace Sharing Capabilities: Multiple Controllers 00:31:39.786 Size (in LBAs): 131072 (0GiB) 00:31:39.786 Capacity (in LBAs): 131072 (0GiB) 00:31:39.786 Utilization (in LBAs): 131072 (0GiB) 00:31:39.786 NGUID: ABCDEF0123456789ABCDEF0123456789 00:31:39.786 EUI64: ABCDEF0123456789 00:31:39.786 UUID: c353aa0b-361a-4b11-83f1-aa008741e46a 00:31:39.786 Thin Provisioning: Not Supported 00:31:39.786 Per-NS Atomic Units: Yes 00:31:39.786 Atomic Boundary Size (Normal): 0 00:31:39.786 Atomic Boundary Size (PFail): 0 00:31:39.786 Atomic Boundary Offset: 0 00:31:39.786 Maximum Single Source Range Length: 65535 00:31:39.786 Maximum Copy Length: 65535 00:31:39.786 Maximum Source Range Count: 1 00:31:39.786 NGUID/EUI64 Never Reused: No 00:31:39.786 Namespace Write Protected: No 00:31:39.786 Number of LBA Formats: 1 
00:31:39.786 Current LBA Format: LBA Format #00 00:31:39.786 LBA Format #00: Data Size: 512 Metadata Size: 0 00:31:39.786 00:31:39.786 14:02:39 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:31:39.786 14:02:39 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:39.786 14:02:39 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:39.786 14:02:39 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:39.786 14:02:39 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:39.786 14:02:39 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:31:39.786 14:02:39 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:31:39.786 14:02:39 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:39.786 14:02:39 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:31:39.786 14:02:39 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:31:39.786 14:02:39 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:31:39.786 14:02:39 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:31:39.786 14:02:39 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:39.786 14:02:39 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:31:39.786 rmmod nvme_rdma 00:31:39.786 rmmod nvme_fabrics 00:31:39.786 14:02:39 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:39.786 14:02:39 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:31:39.786 14:02:39 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:31:39.786 14:02:39 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 1871298 ']' 00:31:39.786 14:02:39 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 1871298 00:31:39.786 14:02:39 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 1871298 ']' 00:31:39.786 14:02:39 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 1871298 00:31:39.786 14:02:39 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:31:39.786 14:02:39 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:39.786 14:02:39 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1871298 00:31:39.786 14:02:39 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:39.786 14:02:39 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:39.786 14:02:39 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1871298' 00:31:39.786 killing process with pid 1871298 00:31:39.786 14:02:39 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 1871298 00:31:39.786 14:02:39 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 1871298 00:31:40.046 14:02:39 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:40.046 14:02:39 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:31:40.046 00:31:40.046 real 
0m7.403s 00:31:40.046 user 0m5.736s 00:31:40.046 sys 0m4.987s 00:31:40.046 14:02:39 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:40.046 14:02:39 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:40.046 ************************************ 00:31:40.046 END TEST nvmf_identify 00:31:40.046 ************************************ 00:31:40.046 14:02:39 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=rdma 00:31:40.046 14:02:39 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:31:40.046 14:02:39 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:40.046 14:02:39 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:40.046 ************************************ 00:31:40.046 START TEST nvmf_perf 00:31:40.046 ************************************ 00:31:40.046 14:02:39 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=rdma 00:31:40.306 * Looking for test storage... 00:31:40.306 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:31:40.306 14:02:39 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:40.306 14:02:39 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lcov --version 00:31:40.306 14:02:39 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:40.306 14:02:40 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:40.306 14:02:40 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:40.306 14:02:40 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:40.306 14:02:40 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:40.306 14:02:40 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:31:40.306 14:02:40 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:31:40.306 14:02:40 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:31:40.306 14:02:40 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:31:40.306 14:02:40 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:31:40.306 14:02:40 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:31:40.306 14:02:40 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:31:40.306 14:02:40 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:40.306 14:02:40 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:31:40.306 14:02:40 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:31:40.306 14:02:40 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:40.306 14:02:40 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:40.306 14:02:40 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:31:40.307 14:02:40 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:31:40.307 14:02:40 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:40.307 14:02:40 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:31:40.307 14:02:40 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:31:40.307 14:02:40 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:31:40.307 14:02:40 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:31:40.307 14:02:40 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:40.307 14:02:40 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:31:40.307 14:02:40 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:31:40.307 14:02:40 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:40.307 14:02:40 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:40.307 14:02:40 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:31:40.307 14:02:40 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:40.307 14:02:40 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:40.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:40.307 --rc genhtml_branch_coverage=1 00:31:40.307 --rc genhtml_function_coverage=1 00:31:40.307 --rc genhtml_legend=1 00:31:40.307 --rc geninfo_all_blocks=1 00:31:40.307 --rc geninfo_unexecuted_blocks=1 00:31:40.307 00:31:40.307 ' 00:31:40.307 14:02:40 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:40.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:40.307 --rc genhtml_branch_coverage=1 00:31:40.307 --rc genhtml_function_coverage=1 00:31:40.307 --rc genhtml_legend=1 00:31:40.307 --rc geninfo_all_blocks=1 00:31:40.307 --rc geninfo_unexecuted_blocks=1 00:31:40.307 00:31:40.307 ' 00:31:40.307 14:02:40 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:40.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:40.307 --rc genhtml_branch_coverage=1 00:31:40.307 --rc genhtml_function_coverage=1 00:31:40.307 --rc genhtml_legend=1 00:31:40.307 --rc geninfo_all_blocks=1 00:31:40.307 --rc geninfo_unexecuted_blocks=1 00:31:40.307 00:31:40.307 ' 00:31:40.307 14:02:40 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:40.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:40.307 --rc genhtml_branch_coverage=1 00:31:40.307 --rc genhtml_function_coverage=1 00:31:40.307 --rc genhtml_legend=1 00:31:40.307 --rc geninfo_all_blocks=1 00:31:40.307 --rc geninfo_unexecuted_blocks=1 00:31:40.307 00:31:40.307 ' 00:31:40.307 14:02:40 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:31:40.307 14:02:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:31:40.307 14:02:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:40.307 14:02:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:40.307 14:02:40 
nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:40.307 14:02:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:40.307 14:02:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:40.307 14:02:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:40.307 14:02:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:40.307 14:02:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:40.307 14:02:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:40.307 14:02:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:40.307 14:02:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:31:40.307 14:02:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:31:40.307 14:02:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:40.307 14:02:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:40.307 14:02:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:40.307 14:02:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:40.307 14:02:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:31:40.307 14:02:40 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:31:40.307 14:02:40 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:40.307 14:02:40 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:40.307 14:02:40 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:40.307 14:02:40 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:40.307 14:02:40 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:40.307 14:02:40 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:40.307 14:02:40 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:31:40.307 14:02:40 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:40.307 14:02:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:31:40.307 14:02:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:40.307 14:02:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:40.307 14:02:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:40.307 14:02:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:40.307 14:02:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:40.307 14:02:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:40.307 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:40.307 14:02:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:40.307 14:02:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:40.307 14:02:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:40.307 14:02:40 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:31:40.307 14:02:40 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:31:40.307 14:02:40 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:31:40.307 14:02:40 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:31:40.307 14:02:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:31:40.307 14:02:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:40.307 14:02:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:40.307 14:02:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:40.307 14:02:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:40.307 14:02:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:40.307 14:02:40 
nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:40.307 14:02:40 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:40.307 14:02:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:40.307 14:02:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:40.307 14:02:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:31:40.307 14:02:40 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:31:46.874 14:02:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:46.874 14:02:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:31:46.874 14:02:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:46.874 14:02:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:46.874 14:02:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:46.874 14:02:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:46.874 14:02:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:46.874 14:02:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:31:46.874 14:02:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:46.874 14:02:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:31:46.874 14:02:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:31:46.874 14:02:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:31:46.874 14:02:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:31:46.874 14:02:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:31:46.874 14:02:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:31:46.874 14:02:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:46.874 14:02:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:46.874 14:02:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:46.874 14:02:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:46.874 14:02:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:46.874 14:02:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:46.874 14:02:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:46.874 14:02:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:46.874 14:02:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:46.874 14:02:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:46.874 14:02:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:46.874 14:02:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:46.874 14:02:45 
nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:46.874 14:02:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:31:46.874 14:02:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:31:46.874 14:02:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:31:46.874 14:02:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:31:46.874 14:02:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:31:46.874 14:02:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:46.874 14:02:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:46.874 14:02:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:31:46.874 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:31:46.874 14:02:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:31:46.874 14:02:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:31:46.874 14:02:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:31:46.874 14:02:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:31:46.874 14:02:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:31:46.874 14:02:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:31:46.874 14:02:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:46.874 14:02:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:31:46.874 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:31:46.874 14:02:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:31:46.874 14:02:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:31:46.874 14:02:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:31:46.874 14:02:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:31:46.874 14:02:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:31:46.874 14:02:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:31:46.874 14:02:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:46.874 14:02:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:31:46.874 14:02:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:46.874 14:02:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:46.874 14:02:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:31:46.874 14:02:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:46.874 14:02:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:46.874 14:02:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:31:46.874 Found net devices under 0000:18:00.0: mlx_0_0 00:31:46.874 14:02:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 
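The device discovery traced above resolves each RDMA-capable PCI device to its kernel net device through sysfs. A minimal standalone sketch of that one step, with the PCI address taken from the log and the loop structure illustrative rather than copied from common.sh:

#!/usr/bin/env bash
# Map a PCI address to the net devices it exposes, as the trace above does
# via pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*).
pci=0000:18:00.0                                  # first mlx5 port in this log
for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
    [ -e "$netdir" ] || continue                  # no net device under this PCI id
    echo "Found net devices under $pci: $(basename "$netdir")"
done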
00:31:46.874 14:02:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:46.874 14:02:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:46.874 14:02:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:31:46.874 14:02:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:46.874 14:02:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:46.874 14:02:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:31:46.874 Found net devices under 0000:18:00.1: mlx_0_1 00:31:46.874 14:02:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:46.874 14:02:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:46.874 14:02:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:31:46.874 14:02:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:46.874 14:02:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:31:46.874 14:02:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:31:46.874 14:02:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@448 -- # rdma_device_init 00:31:46.874 14:02:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:31:46.874 14:02:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@62 -- # uname 00:31:46.874 14:02:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:31:46.874 14:02:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@66 -- # modprobe ib_cm 00:31:46.874 14:02:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@67 -- # modprobe ib_core 00:31:46.874 14:02:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@68 -- # modprobe ib_umad 00:31:46.874 14:02:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:31:46.874 14:02:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@70 -- # modprobe iw_cm 00:31:46.874 14:02:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:31:46.874 14:02:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:31:46.874 14:02:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@530 -- # allocate_nic_ips 00:31:46.874 14:02:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:31:46.874 14:02:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@77 -- # get_rdma_if_list 00:31:46.874 14:02:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:31:46.874 14:02:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:31:46.874 14:02:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:31:46.874 14:02:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:31:46.874 14:02:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:31:46.874 14:02:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:31:46.874 14:02:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:46.874 14:02:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ mlx_0_0 == 
\m\l\x\_\0\_\0 ]] 00:31:46.874 14:02:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@108 -- # echo mlx_0_0 00:31:46.874 14:02:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@109 -- # continue 2 00:31:46.874 14:02:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:31:46.874 14:02:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:46.874 14:02:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:31:46.874 14:02:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:46.874 14:02:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:31:46.874 14:02:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@108 -- # echo mlx_0_1 00:31:46.874 14:02:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@109 -- # continue 2 00:31:46.874 14:02:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:31:46.874 14:02:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:31:46.874 14:02:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:31:46.874 14:02:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:31:46.874 14:02:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:31:46.874 14:02:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:31:46.874 14:02:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:31:46.874 14:02:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:31:46.874 14:02:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:31:46.875 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:31:46.875 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:31:46.875 altname enp24s0f0np0 00:31:46.875 altname ens785f0np0 00:31:46.875 inet 192.168.100.8/24 scope global mlx_0_0 00:31:46.875 valid_lft forever preferred_lft forever 00:31:46.875 14:02:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:31:46.875 14:02:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:31:46.875 14:02:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:31:46.875 14:02:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:31:46.875 14:02:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:31:46.875 14:02:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:31:46.875 14:02:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:31:46.875 14:02:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:31:46.875 14:02:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:31:46.875 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:31:46.875 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:31:46.875 altname enp24s0f1np1 00:31:46.875 altname ens785f1np1 00:31:46.875 inet 192.168.100.9/24 scope global mlx_0_1 00:31:46.875 valid_lft forever preferred_lft forever 00:31:46.875 14:02:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:31:46.875 14:02:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- 
# '[' '' == iso ']' 00:31:46.875 14:02:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:31:46.875 14:02:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:31:46.875 14:02:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:31:46.875 14:02:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@90 -- # get_rdma_if_list 00:31:46.875 14:02:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:31:46.875 14:02:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:31:46.875 14:02:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:31:46.875 14:02:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:31:46.875 14:02:46 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:31:46.875 14:02:46 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:31:46.875 14:02:46 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:46.875 14:02:46 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:31:46.875 14:02:46 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@108 -- # echo mlx_0_0 00:31:46.875 14:02:46 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@109 -- # continue 2 00:31:46.875 14:02:46 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:31:46.875 14:02:46 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:46.875 14:02:46 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:31:46.875 14:02:46 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:46.875 14:02:46 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:31:46.875 14:02:46 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@108 -- # echo mlx_0_1 00:31:46.875 14:02:46 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@109 -- # continue 2 00:31:46.875 14:02:46 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:31:46.875 14:02:46 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:31:46.875 14:02:46 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:31:46.875 14:02:46 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:31:46.875 14:02:46 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:31:46.875 14:02:46 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:31:46.875 14:02:46 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:31:46.875 14:02:46 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:31:46.875 14:02:46 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:31:46.875 14:02:46 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:31:46.875 14:02:46 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:31:46.875 14:02:46 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:31:46.875 14:02:46 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@484 -- # 
RDMA_IP_LIST='192.168.100.8 00:31:46.875 192.168.100.9' 00:31:46.875 14:02:46 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:31:46.875 192.168.100.9' 00:31:46.875 14:02:46 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@485 -- # head -n 1 00:31:46.875 14:02:46 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:31:46.875 14:02:46 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:31:46.875 192.168.100.9' 00:31:46.875 14:02:46 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@486 -- # tail -n +2 00:31:46.875 14:02:46 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@486 -- # head -n 1 00:31:46.875 14:02:46 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:31:46.875 14:02:46 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:31:46.875 14:02:46 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:31:46.875 14:02:46 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:31:46.875 14:02:46 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:31:46.875 14:02:46 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:31:46.875 14:02:46 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:31:46.875 14:02:46 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:46.875 14:02:46 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:46.875 14:02:46 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:31:46.875 14:02:46 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=1874826 00:31:46.875 14:02:46 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 1874826 00:31:46.875 14:02:46 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:31:46.875 14:02:46 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 1874826 ']' 00:31:46.875 14:02:46 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:46.875 14:02:46 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:46.875 14:02:46 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:46.875 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:46.875 14:02:46 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:46.875 14:02:46 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:31:46.875 [2024-12-05 14:02:46.131125] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 
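The addresses feeding NVMF_FIRST_TARGET_IP and NVMF_SECOND_TARGET_IP come from the ip/awk/cut pipeline traced above. A standalone sketch of that logic, assuming only the interface names the log reports (mlx_0_0, mlx_0_1); the helper name mirrors the get_ip_address() call shown in the trace:

#!/usr/bin/env bash
# Extract the IPv4 address of an RDMA interface, as get_ip_address() does above.
get_ip_address() { ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1; }

RDMA_IP_LIST="$(get_ip_address mlx_0_0)
$(get_ip_address mlx_0_1)"
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                # 192.168.100.8
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # 192.168.100.9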
00:31:46.875 [2024-12-05 14:02:46.131172] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:46.875 [2024-12-05 14:02:46.209726] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:46.875 [2024-12-05 14:02:46.232964] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:46.875 [2024-12-05 14:02:46.233002] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:46.875 [2024-12-05 14:02:46.233008] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:46.875 [2024-12-05 14:02:46.233014] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:46.875 [2024-12-05 14:02:46.233019] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:46.875 [2024-12-05 14:02:46.234218] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:46.875 [2024-12-05 14:02:46.234249] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:46.875 [2024-12-05 14:02:46.234358] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:46.875 [2024-12-05 14:02:46.234357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:46.875 14:02:46 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:46.875 14:02:46 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:31:46.875 14:02:46 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:46.875 14:02:46 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:46.875 14:02:46 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:31:46.875 14:02:46 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:46.875 14:02:46 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:31:46.875 14:02:46 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:31:50.161 14:02:49 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:31:50.161 14:02:49 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:31:50.161 14:02:49 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:d8:00.0 00:31:50.161 14:02:49 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:50.161 14:02:49 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:31:50.161 14:02:49 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:d8:00.0 ']' 00:31:50.161 14:02:49 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:31:50.161 14:02:49 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' rdma == rdma ']' 00:31:50.161 14:02:49 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_transport -t rdma --num-shared-buffers 1024 -c 0 00:31:50.161 [2024-12-05 14:02:49.952754] rdma.c:2773:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:31:50.161 [2024-12-05 14:02:49.971360] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x21f1bd0/0x20c7d90) succeed. 00:31:50.161 [2024-12-05 14:02:49.979807] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x21f30d0/0x2147a40) succeed. 00:31:50.420 14:02:50 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:50.420 14:02:50 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:31:50.420 14:02:50 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:50.679 14:02:50 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:31:50.679 14:02:50 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:31:50.937 14:02:50 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:31:50.937 [2024-12-05 14:02:50.776974] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:31:51.197 14:02:50 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:31:51.197 14:02:50 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:d8:00.0 ']' 00:31:51.197 14:02:50 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:d8:00.0' 00:31:51.197 14:02:50 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:31:51.197 14:02:50 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:d8:00.0' 00:31:52.576 Initializing NVMe Controllers 00:31:52.576 Attached to NVMe Controller at 0000:d8:00.0 [8086:0a54] 00:31:52.576 Associating PCIE (0000:d8:00.0) NSID 1 with lcore 0 00:31:52.576 Initialization complete. Launching workers. 
00:31:52.576 ======================================================== 00:31:52.576 Latency(us) 00:31:52.576 Device Information : IOPS MiB/s Average min max 00:31:52.576 PCIE (0000:d8:00.0) NSID 1 from core 0: 107281.73 419.07 297.72 27.95 5212.81 00:31:52.576 ======================================================== 00:31:52.576 Total : 107281.73 419.07 297.72 27.95 5212.81 00:31:52.576 00:31:52.576 14:02:52 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:31:55.867 Initializing NVMe Controllers 00:31:55.867 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:31:55.867 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:55.867 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:31:55.867 Initialization complete. Launching workers. 00:31:55.867 ======================================================== 00:31:55.867 Latency(us) 00:31:55.867 Device Information : IOPS MiB/s Average min max 00:31:55.867 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7095.00 27.71 140.74 47.40 4080.17 00:31:55.867 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 5473.00 21.38 182.51 68.09 4099.66 00:31:55.867 ======================================================== 00:31:55.867 Total : 12568.00 49.09 158.93 47.40 4099.66 00:31:55.867 00:31:55.867 14:02:55 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:31:59.157 Initializing NVMe Controllers 00:31:59.157 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:31:59.157 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:59.157 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:31:59.157 Initialization complete. Launching workers. 00:31:59.157 ======================================================== 00:31:59.157 Latency(us) 00:31:59.157 Device Information : IOPS MiB/s Average min max 00:31:59.157 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 19439.36 75.93 1646.36 437.97 5365.29 00:31:59.157 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 4031.87 15.75 7979.59 7762.17 8201.57 00:31:59.157 ======================================================== 00:31:59.157 Total : 23471.23 91.68 2734.28 437.97 8201.57 00:31:59.157 00:31:59.157 14:02:58 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ mlx5 == \e\8\1\0 ]] 00:31:59.157 14:02:58 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:32:04.434 Initializing NVMe Controllers 00:32:04.434 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:32:04.434 Controller IO queue size 128, less than required. 00:32:04.434 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
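The result tables above all come from the same perf binary, pointed first at the local PCIe SSD and then at the fabrics target; only the -r transport string and queue depth change. A sketch of the two invocation shapes, flags taken from the traced commands (-i 0 selects the shared-memory group so perf can run alongside the target; the qd=32 fabrics run adds -HI exactly as traced):

    PERF=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf
    # local baseline straight against the PCIe controller
    $PERF -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:d8:00.0'
    # same workload over NVMe-oF RDMA
    $PERF -q 1 -o 4096 -w randrw -M 50 -t 1 \
        -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420'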
00:32:04.434 Controller IO queue size 128, less than required. 00:32:04.434 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:04.434 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:04.434 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:32:04.434 Initialization complete. Launching workers. 00:32:04.434 ======================================================== 00:32:04.434 Latency(us) 00:32:04.434 Device Information : IOPS MiB/s Average min max 00:32:04.434 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4180.50 1045.13 30766.26 14280.38 70612.13 00:32:04.434 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 4202.49 1050.62 30250.12 14017.57 47832.30 00:32:04.434 ======================================================== 00:32:04.434 Total : 8382.99 2095.75 30507.51 14017.57 70612.13 00:32:04.434 00:32:04.434 14:03:03 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -c 0xf -P 4 00:32:04.434 No valid NVMe controllers or AIO or URING devices found 00:32:04.434 Initializing NVMe Controllers 00:32:04.434 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:32:04.434 Controller IO queue size 128, less than required. 00:32:04.434 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:04.434 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:32:04.434 Controller IO queue size 128, less than required. 00:32:04.434 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:04.434 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:32:04.434 WARNING: Some requested NVMe devices were skipped 00:32:04.434 14:03:03 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' --transport-stat 00:32:08.646 Initializing NVMe Controllers 00:32:08.646 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:32:08.646 Controller IO queue size 128, less than required. 00:32:08.646 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:08.646 Controller IO queue size 128, less than required. 00:32:08.646 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:08.646 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:08.646 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:32:08.646 Initialization complete. Launching workers. 
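The --transport-stat run just launched prints per-namespace RDMA poll-group counters below. The useful derived number is the busy-poll fraction, (polls - idle_polls) / polls; a small awk sketch, assuming one counter per line as the original log printed them (field names as in this log):

    awk '$1 == "polls:"      { p = $2 }
         $1 == "idle_polls:" { printf "busy poll fraction: %.4f\n", (p - $2) / p }'

For the NSID 1 block below this works out to (429306 - 425229) / 429306, about 0.0095, i.e. roughly 1% of polls found work.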
00:32:08.646 00:32:08.646 ==================== 00:32:08.646 lcore 0, ns RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:32:08.646 RDMA transport: 00:32:08.646 dev name: mlx5_0 00:32:08.646 polls: 429306 00:32:08.646 idle_polls: 425229 00:32:08.646 completions: 47490 00:32:08.646 queued_requests: 1 00:32:08.646 total_send_wrs: 23745 00:32:08.646 send_doorbell_updates: 3830 00:32:08.646 total_recv_wrs: 23872 00:32:08.646 recv_doorbell_updates: 3832 00:32:08.646 --------------------------------- 00:32:08.646 00:32:08.646 ==================== 00:32:08.646 lcore 0, ns RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:32:08.646 RDMA transport: 00:32:08.646 dev name: mlx5_0 00:32:08.646 polls: 433458 00:32:08.646 idle_polls: 433185 00:32:08.646 completions: 21010 00:32:08.646 queued_requests: 1 00:32:08.646 total_send_wrs: 10505 00:32:08.646 send_doorbell_updates: 252 00:32:08.646 total_recv_wrs: 10632 00:32:08.646 recv_doorbell_updates: 254 00:32:08.646 --------------------------------- 00:32:08.646 ======================================================== 00:32:08.646 Latency(us) 00:32:08.646 Device Information : IOPS MiB/s Average min max 00:32:08.647 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5933.67 1483.42 21610.03 11218.03 51135.90 00:32:08.647 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2624.97 656.24 48946.05 27866.81 77523.82 00:32:08.647 ======================================================== 00:32:08.647 Total : 8558.64 2139.66 29994.09 11218.03 77523.82 00:32:08.647 00:32:08.647 14:03:08 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:32:08.647 14:03:08 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:08.647 14:03:08 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:32:08.647 14:03:08 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:d8:00.0 ']' 00:32:08.647 14:03:08 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:32:20.979 14:03:20 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # ls_guid=a5eca5dd-22e4-4ef7-80a6-e4e7b76bbe29 00:32:20.979 14:03:20 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb a5eca5dd-22e4-4ef7-80a6-e4e7b76bbe29 00:32:20.979 14:03:20 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # local lvs_uuid=a5eca5dd-22e4-4ef7-80a6-e4e7b76bbe29 00:32:20.979 14:03:20 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # local lvs_info 00:32:20.979 14:03:20 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # local fc 00:32:20.979 14:03:20 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # local cs 00:32:20.979 14:03:20 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:32:20.979 14:03:20 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:32:20.979 { 00:32:20.979 "uuid": "a5eca5dd-22e4-4ef7-80a6-e4e7b76bbe29", 00:32:20.979 "name": "lvs_0", 00:32:20.979 "base_bdev": "Nvme0n1", 00:32:20.979 "total_data_clusters": 952929, 00:32:20.979 "free_clusters": 952929, 00:32:20.979 "block_size": 512, 00:32:20.979 "cluster_size": 4194304 00:32:20.979 
} 00:32:20.979 ]' 00:32:20.979 14:03:20 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="a5eca5dd-22e4-4ef7-80a6-e4e7b76bbe29") .free_clusters' 00:32:20.979 14:03:20 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # fc=952929 00:32:20.979 14:03:20 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="a5eca5dd-22e4-4ef7-80a6-e4e7b76bbe29") .cluster_size' 00:32:20.979 14:03:20 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # cs=4194304 00:32:20.979 14:03:20 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1377 -- # free_mb=3811716 00:32:20.979 14:03:20 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1378 -- # echo 3811716 00:32:20.979 3811716 00:32:20.979 14:03:20 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@77 -- # '[' 3811716 -gt 20480 ']' 00:32:20.979 14:03:20 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@78 -- # free_mb=20480 00:32:20.979 14:03:20 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u a5eca5dd-22e4-4ef7-80a6-e4e7b76bbe29 lbd_0 20480 00:32:21.547 14:03:21 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # lb_guid=3474ff6a-030d-4b71-af92-5792128171db 00:32:21.547 14:03:21 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore 3474ff6a-030d-4b71-af92-5792128171db lvs_n_0 00:32:24.080 14:03:23 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=99699c1a-bcf8-469e-96c1-2c35cdcbe1e6 00:32:24.080 14:03:23 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb 99699c1a-bcf8-469e-96c1-2c35cdcbe1e6 00:32:24.080 14:03:23 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # local lvs_uuid=99699c1a-bcf8-469e-96c1-2c35cdcbe1e6 00:32:24.080 14:03:23 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # local lvs_info 00:32:24.080 14:03:23 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # local fc 00:32:24.080 14:03:23 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # local cs 00:32:24.080 14:03:23 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:32:24.080 14:03:23 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:32:24.080 { 00:32:24.080 "uuid": "a5eca5dd-22e4-4ef7-80a6-e4e7b76bbe29", 00:32:24.080 "name": "lvs_0", 00:32:24.080 "base_bdev": "Nvme0n1", 00:32:24.080 "total_data_clusters": 952929, 00:32:24.080 "free_clusters": 947809, 00:32:24.080 "block_size": 512, 00:32:24.080 "cluster_size": 4194304 00:32:24.080 }, 00:32:24.080 { 00:32:24.080 "uuid": "99699c1a-bcf8-469e-96c1-2c35cdcbe1e6", 00:32:24.080 "name": "lvs_n_0", 00:32:24.080 "base_bdev": "3474ff6a-030d-4b71-af92-5792128171db", 00:32:24.080 "total_data_clusters": 5114, 00:32:24.080 "free_clusters": 5114, 00:32:24.080 "block_size": 512, 00:32:24.080 "cluster_size": 4194304 00:32:24.080 } 00:32:24.080 ]' 00:32:24.080 14:03:23 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="99699c1a-bcf8-469e-96c1-2c35cdcbe1e6") .free_clusters' 00:32:24.080 14:03:23 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # fc=5114 00:32:24.080 14:03:23 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # jq '.[] | 
select(.uuid=="99699c1a-bcf8-469e-96c1-2c35cdcbe1e6") .cluster_size' 00:32:24.080 14:03:23 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # cs=4194304 00:32:24.080 14:03:23 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1377 -- # free_mb=20456 00:32:24.080 14:03:23 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1378 -- # echo 20456 00:32:24.080 20456 00:32:24.080 14:03:23 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:32:24.080 14:03:23 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 99699c1a-bcf8-469e-96c1-2c35cdcbe1e6 lbd_nest_0 20456 00:32:24.338 14:03:24 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=5625faeb-23ce-4da5-9800-95c0a179902f 00:32:24.338 14:03:24 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:24.596 14:03:24 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:32:24.596 14:03:24 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 5625faeb-23ce-4da5-9800-95c0a179902f 00:32:24.596 14:03:24 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:32:24.854 14:03:24 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:32:24.854 14:03:24 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:32:24.854 14:03:24 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:32:24.854 14:03:24 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:32:24.854 14:03:24 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:32:37.114 Initializing NVMe Controllers 00:32:37.114 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:32:37.114 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:37.114 Initialization complete. Launching workers. 
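The free_mb figures logged above follow directly from the cluster counts times the 4 MiB cluster size; a one-line check of each, numbers from the traced jq output:

    echo $(( 952929 * 4194304 / 1048576 ))   # 3811716 MiB free on lvs_0, capped to 20480 for lbd_0
    echo $((   5114 * 4194304 / 1048576 ))   # 20456 MiB free on lvs_n_0, used in full for lbd_nest_0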
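The six perf runs that follow are generated by the nested loop traced above over qd_depth=(1 32 128) and io_size=(512 131072), each 10 seconds this time; reconstructed from the traced lines:

    PERF=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf
    for qd in 1 32 128; do
      for o in 512 131072; do
        $PERF -q "$qd" -o "$o" -w randrw -M 50 -t 10 \
            -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420'
      done
    done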
00:32:37.114 ======================================================== 00:32:37.114 Latency(us) 00:32:37.114 Device Information : IOPS MiB/s Average min max 00:32:37.114 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 6138.82 3.00 162.56 65.15 8035.09 00:32:37.114 ======================================================== 00:32:37.114 Total : 6138.82 3.00 162.56 65.15 8035.09 00:32:37.114 00:32:37.114 14:03:36 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:32:37.114 14:03:36 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:32:49.325 Initializing NVMe Controllers 00:32:49.325 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:32:49.325 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:49.325 Initialization complete. Launching workers. 00:32:49.325 ======================================================== 00:32:49.325 Latency(us) 00:32:49.325 Device Information : IOPS MiB/s Average min max 00:32:49.325 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2699.20 337.40 370.13 152.83 7104.68 00:32:49.325 ======================================================== 00:32:49.325 Total : 2699.20 337.40 370.13 152.83 7104.68 00:32:49.325 00:32:49.325 14:03:47 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:32:49.325 14:03:47 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:32:49.325 14:03:47 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:32:59.304 Initializing NVMe Controllers 00:32:59.304 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:32:59.304 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:59.304 Initialization complete. Launching workers. 00:32:59.304 ======================================================== 00:32:59.304 Latency(us) 00:32:59.304 Device Information : IOPS MiB/s Average min max 00:32:59.304 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 12236.31 5.97 2615.17 974.66 8053.65 00:32:59.304 ======================================================== 00:32:59.304 Total : 12236.31 5.97 2615.17 974.66 8053.65 00:32:59.304 00:32:59.304 14:03:58 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:32:59.304 14:03:58 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:33:11.509 Initializing NVMe Controllers 00:33:11.509 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:33:11.509 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:33:11.509 Initialization complete. Launching workers. 
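The MiB/s column in these tables is just IOPS times IO size; e.g. for the 128 KiB, qd=1 run above:

    awk 'BEGIN { print 2699.20 * 131072 / 1048576 }'   # 337.4, matching the table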
00:33:11.509 ======================================================== 00:33:11.509 Latency(us) 00:33:11.509 Device Information : IOPS MiB/s Average min max 00:33:11.509 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4013.10 501.64 7978.23 2889.14 11021.33 00:33:11.509 ======================================================== 00:33:11.509 Total : 4013.10 501.64 7978.23 2889.14 11021.33 00:33:11.509 00:33:11.509 14:04:10 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:33:11.509 14:04:10 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:33:11.509 14:04:10 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:33:23.719 Initializing NVMe Controllers 00:33:23.719 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:33:23.719 Controller IO queue size 128, less than required. 00:33:23.719 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:33:23.719 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:33:23.719 Initialization complete. Launching workers. 00:33:23.719 ======================================================== 00:33:23.719 Latency(us) 00:33:23.719 Device Information : IOPS MiB/s Average min max 00:33:23.719 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 19509.80 9.53 6562.96 1716.43 16587.20 00:33:23.719 ======================================================== 00:33:23.719 Total : 19509.80 9.53 6562.96 1716.43 16587.20 00:33:23.719 00:33:23.719 14:04:21 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:33:23.719 14:04:21 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:33:33.694 Initializing NVMe Controllers 00:33:33.694 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:33:33.694 Controller IO queue size 128, less than required. 00:33:33.694 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:33:33.694 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:33:33.694 Initialization complete. Launching workers. 
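As a sanity check, IOPS times mean latency should recover the configured queue depth (Little's law); for the qd=32, 128 KiB table above:

    awk 'BEGIN { print 4013.10 * 7978.23e-6 }'   # ~32.0, i.e. the full queue stays occupied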
00:33:33.694 ======================================================== 00:33:33.694 Latency(us) 00:33:33.694 Device Information : IOPS MiB/s Average min max 00:33:33.694 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11483.34 1435.42 11143.52 3325.72 23688.03 00:33:33.694 ======================================================== 00:33:33.694 Total : 11483.34 1435.42 11143.52 3325.72 23688.03 00:33:33.694 00:33:33.694 14:04:32 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:33.694 14:04:32 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 5625faeb-23ce-4da5-9800-95c0a179902f 00:33:33.952 14:04:33 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:33:33.952 14:04:33 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 3474ff6a-030d-4b71-af92-5792128171db 00:33:34.231 14:04:34 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:33:34.490 14:04:34 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:33:34.490 14:04:34 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:33:34.490 14:04:34 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:34.490 14:04:34 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:33:34.490 14:04:34 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:33:34.490 14:04:34 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:33:34.490 14:04:34 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:33:34.490 14:04:34 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:34.490 14:04:34 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:33:34.490 rmmod nvme_rdma 00:33:34.490 rmmod nvme_fabrics 00:33:34.490 14:04:34 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:34.490 14:04:34 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:33:34.490 14:04:34 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:33:34.490 14:04:34 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 1874826 ']' 00:33:34.490 14:04:34 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 1874826 00:33:34.490 14:04:34 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 1874826 ']' 00:33:34.490 14:04:34 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 1874826 00:33:34.490 14:04:34 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:33:34.490 14:04:34 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:34.490 14:04:34 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1874826 00:33:34.490 14:04:34 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:34.490 14:04:34 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:34.490 14:04:34 
nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1874826' 00:33:34.490 killing process with pid 1874826 00:33:34.490 14:04:34 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 1874826 00:33:34.490 14:04:34 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 1874826 00:33:38.679 14:04:38 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:38.679 14:04:38 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:33:38.679 00:33:38.679 real 1m58.269s 00:33:38.679 user 7m31.131s 00:33:38.679 sys 0m6.212s 00:33:38.680 14:04:38 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:38.680 14:04:38 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:33:38.680 ************************************ 00:33:38.680 END TEST nvmf_perf 00:33:38.680 ************************************ 00:33:38.680 14:04:38 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=rdma 00:33:38.680 14:04:38 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:33:38.680 14:04:38 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:38.680 14:04:38 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:38.680 ************************************ 00:33:38.680 START TEST nvmf_fio_host 00:33:38.680 ************************************ 00:33:38.680 14:04:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=rdma 00:33:38.680 * Looking for test storage... 
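nvmf_perf has passed and the harness moves on to the fio host suite. To reproduce just this stage outside Jenkins, the traced run_test line reduces to (workspace path as in this job):

    cd /var/jenkins/workspace/nvmf-phy-autotest/spdk
    test/nvmf/host/fio.sh --transport=rdma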
00:33:38.680 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:33:38.680 14:04:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:33:38.680 14:04:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lcov --version 00:33:38.680 14:04:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:33:38.680 14:04:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:33:38.680 14:04:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:38.680 14:04:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:38.680 14:04:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:38.680 14:04:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:33:38.680 14:04:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:33:38.680 14:04:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:33:38.680 14:04:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:33:38.680 14:04:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:33:38.680 14:04:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:33:38.680 14:04:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:33:38.680 14:04:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:38.680 14:04:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:33:38.680 14:04:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:33:38.680 14:04:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:38.680 14:04:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:38.680 14:04:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:33:38.680 14:04:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:33:38.680 14:04:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:38.680 14:04:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:33:38.680 14:04:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:33:38.680 14:04:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:33:38.680 14:04:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:33:38.680 14:04:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:38.680 14:04:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:33:38.680 14:04:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:33:38.680 14:04:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:38.680 14:04:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:38.680 14:04:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:33:38.680 14:04:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:38.680 14:04:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:33:38.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:38.680 --rc genhtml_branch_coverage=1 00:33:38.680 --rc genhtml_function_coverage=1 00:33:38.680 --rc genhtml_legend=1 00:33:38.680 --rc geninfo_all_blocks=1 00:33:38.680 --rc geninfo_unexecuted_blocks=1 00:33:38.680 00:33:38.680 ' 00:33:38.680 14:04:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:33:38.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:38.680 --rc genhtml_branch_coverage=1 00:33:38.680 --rc genhtml_function_coverage=1 00:33:38.680 --rc genhtml_legend=1 00:33:38.680 --rc geninfo_all_blocks=1 00:33:38.680 --rc geninfo_unexecuted_blocks=1 00:33:38.680 00:33:38.680 ' 00:33:38.680 14:04:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:33:38.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:38.680 --rc genhtml_branch_coverage=1 00:33:38.680 --rc genhtml_function_coverage=1 00:33:38.680 --rc genhtml_legend=1 00:33:38.680 --rc geninfo_all_blocks=1 00:33:38.680 --rc geninfo_unexecuted_blocks=1 00:33:38.680 00:33:38.680 ' 00:33:38.680 14:04:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:38.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:38.680 --rc genhtml_branch_coverage=1 00:33:38.680 --rc genhtml_function_coverage=1 00:33:38.680 --rc genhtml_legend=1 00:33:38.680 --rc geninfo_all_blocks=1 00:33:38.680 --rc geninfo_unexecuted_blocks=1 00:33:38.680 00:33:38.680 ' 00:33:38.680 14:04:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:33:38.680 14:04:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:33:38.680 14:04:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:38.680 14:04:38 
nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:38.680 14:04:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:38.680 14:04:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:38.680 14:04:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:38.680 14:04:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:38.680 14:04:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:33:38.680 14:04:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:38.680 14:04:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:33:38.680 14:04:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:33:38.680 14:04:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:38.680 14:04:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:38.680 14:04:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:33:38.680 14:04:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:38.680 14:04:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:38.680 14:04:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:38.680 14:04:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:38.680 14:04:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:38.680 14:04:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:38.680 14:04:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:38.680 14:04:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:33:38.680 14:04:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:33:38.680 14:04:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:38.680 14:04:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:38.680 14:04:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:38.680 14:04:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:38.680 14:04:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:33:38.680 14:04:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:33:38.680 14:04:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:38.680 14:04:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:38.681 14:04:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:38.681 14:04:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:38.681 14:04:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:38.681 14:04:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:38.681 14:04:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:33:38.681 14:04:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:38.681 14:04:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:33:38.681 14:04:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:38.681 14:04:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:38.681 14:04:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:38.681 14:04:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:38.681 14:04:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:38.681 14:04:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:38.681 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:38.681 14:04:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:38.681 14:04:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:38.681 14:04:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:38.681 14:04:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:33:38.681 
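The "[: : integer expression expected" complaint above is a real shell slip in nvmf/common.sh: an unset variable reaches a numeric test as an empty string. A defensive pattern for reference (VAR is a stand-in; the trace does not show which variable is empty):

    # expand with a default so test always sees an integer
    if [ "${VAR:-0}" -eq 1 ]; then
      echo "feature enabled"
    fi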
14:04:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:33:38.681 14:04:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:33:38.681 14:04:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:38.681 14:04:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:38.681 14:04:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:38.681 14:04:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:38.681 14:04:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:38.681 14:04:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:38.681 14:04:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:38.681 14:04:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:38.681 14:04:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:38.681 14:04:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:33:38.681 14:04:38 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:33:45.254 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:45.254 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:33:45.254 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:45.254 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:45.254 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:45.254 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:45.254 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:45.254 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:33:45.254 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:45.254 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:33:45.254 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:33:45.254 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:33:45.254 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:33:45.254 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:33:45.254 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:33:45.254 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:45.255 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:45.255 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:45.255 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:45.255 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:45.255 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:45.255 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:45.255 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:45.255 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:45.255 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:45.255 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:45.255 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:45.255 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:45.255 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:33:45.255 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:33:45.255 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:33:45.255 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:33:45.255 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:33:45.255 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:45.255 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:45.255 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:33:45.255 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:33:45.255 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:33:45.255 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:33:45.255 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:33:45.255 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:33:45.255 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:33:45.255 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:33:45.255 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:45.255 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:33:45.255 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:33:45.255 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:33:45.255 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:33:45.255 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:33:45.255 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:33:45.255 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:33:45.255 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:33:45.255 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 
)) 00:33:45.255 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:33:45.255 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:45.255 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:45.255 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:33:45.255 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:45.255 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:45.255 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:33:45.255 Found net devices under 0000:18:00.0: mlx_0_0 00:33:45.255 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:45.255 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:45.255 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:45.255 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:33:45.255 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:45.255 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:45.255 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:33:45.255 Found net devices under 0000:18:00.1: mlx_0_1 00:33:45.255 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:45.255 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:45.255 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:33:45.255 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:45.255 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:33:45.255 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:33:45.255 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@448 -- # rdma_device_init 00:33:45.255 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:33:45.255 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@62 -- # uname 00:33:45.255 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:33:45.255 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@66 -- # modprobe ib_cm 00:33:45.255 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@67 -- # modprobe ib_core 00:33:45.255 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@68 -- # modprobe ib_umad 00:33:45.255 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:33:45.255 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@70 -- # modprobe iw_cm 00:33:45.255 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:33:45.255 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:33:45.255 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@530 -- # allocate_nic_ips 00:33:45.255 
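rdma_device_init above brings up the kernel RDMA stack piece by piece; the same sequence as a standalone snippet, module list copied from the traced modprobe calls:

    for m in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
        modprobe "$m"
    done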
14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:33:45.255 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@77 -- # get_rdma_if_list 00:33:45.255 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:33:45.255 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:33:45.255 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:33:45.255 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:33:45.255 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:33:45.255 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:33:45.255 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:45.255 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:33:45.255 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@108 -- # echo mlx_0_0 00:33:45.255 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@109 -- # continue 2 00:33:45.255 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:33:45.255 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:45.255 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:33:45.255 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:45.255 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:33:45.255 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@108 -- # echo mlx_0_1 00:33:45.255 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@109 -- # continue 2 00:33:45.255 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:33:45.255 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:33:45.255 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:33:45.255 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:33:45.255 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:33:45.255 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:33:45.255 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:33:45.255 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:33:45.255 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:33:45.255 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:33:45.255 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:33:45.255 altname enp24s0f0np0 00:33:45.255 altname ens785f0np0 00:33:45.255 inet 192.168.100.8/24 scope global mlx_0_0 00:33:45.255 valid_lft forever preferred_lft forever 00:33:45.255 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:33:45.255 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:33:45.255 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:33:45.255 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:33:45.255 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:33:45.255 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:33:45.255 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:33:45.255 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:33:45.255 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:33:45.255 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:33:45.255 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:33:45.255 altname enp24s0f1np1 00:33:45.256 altname ens785f1np1 00:33:45.256 inet 192.168.100.9/24 scope global mlx_0_1 00:33:45.256 valid_lft forever preferred_lft forever 00:33:45.256 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:33:45.256 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:45.256 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:33:45.256 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:33:45.256 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:33:45.256 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@90 -- # get_rdma_if_list 00:33:45.256 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:33:45.256 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:33:45.256 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:33:45.256 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:33:45.256 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:33:45.256 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:33:45.256 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:45.256 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:33:45.256 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@108 -- # echo mlx_0_0 00:33:45.256 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@109 -- # continue 2 00:33:45.256 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:33:45.256 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:45.256 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:33:45.256 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:45.256 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:33:45.256 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@108 -- # echo mlx_0_1 00:33:45.256 14:04:44 
nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@109 -- # continue 2 00:33:45.256 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:33:45.256 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:33:45.256 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:33:45.256 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:33:45.256 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:33:45.256 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:33:45.256 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:33:45.256 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:33:45.256 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:33:45.256 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:33:45.256 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:33:45.256 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:33:45.256 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:33:45.256 192.168.100.9' 00:33:45.256 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:33:45.256 192.168.100.9' 00:33:45.256 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@485 -- # head -n 1 00:33:45.256 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:33:45.256 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:33:45.256 192.168.100.9' 00:33:45.256 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@486 -- # tail -n +2 00:33:45.256 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@486 -- # head -n 1 00:33:45.256 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:33:45.256 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:33:45.256 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:33:45.256 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:33:45.256 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:33:45.256 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:33:45.256 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:33:45.256 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:33:45.256 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:45.256 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:33:45.256 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=1898243 00:33:45.256 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:33:45.256 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # 
trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:45.256 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 1898243 00:33:45.256 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 1898243 ']' 00:33:45.256 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:45.256 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:45.256 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:45.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:45.256 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:45.256 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:33:45.256 [2024-12-05 14:04:44.490094] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 00:33:45.256 [2024-12-05 14:04:44.490152] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:45.256 [2024-12-05 14:04:44.565703] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:45.256 [2024-12-05 14:04:44.588478] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:45.256 [2024-12-05 14:04:44.588520] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:45.256 [2024-12-05 14:04:44.588527] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:45.256 [2024-12-05 14:04:44.588533] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:45.256 [2024-12-05 14:04:44.588537] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:45.256 [2024-12-05 14:04:44.589969] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:45.256 [2024-12-05 14:04:44.590081] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:45.256 [2024-12-05 14:04:44.590162] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:45.256 [2024-12-05 14:04:44.590164] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:45.256 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:45.256 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:33:45.256 14:04:44 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:33:45.256 [2024-12-05 14:04:44.857201] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1bf0f30/0x1bf5420) succeed. 00:33:45.256 [2024-12-05 14:04:44.865362] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1bf25c0/0x1c36ac0) succeed. 
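The address plumbing traced earlier reduces to a few pipelines; below is a condensed sketch of get_ip_address and the RDMA_IP_LIST split into first/second target IPs, assuming the mlx_0_* interface names discovered above (the helper-function framing is illustrative, the pipelines mirror the trace):

# Read the IPv4 address off an interface, as get_ip_address does in the trace.
get_ip_address() {
    local ifc=$1
    ip -o -4 addr show "$ifc" | awk '{print $4}' | cut -d/ -f1
}

RDMA_IP_LIST="$(get_ip_address mlx_0_0)
$(get_ip_address mlx_0_1)"    # 192.168.100.8 / 192.168.100.9 on this rig

NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)
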
00:33:45.256 14:04:45 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:33:45.256 14:04:45 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:45.256 14:04:45 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:33:45.256 14:04:45 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:33:45.515 Malloc1 00:33:45.515 14:04:45 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:45.773 14:04:45 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:33:45.773 14:04:45 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:33:46.032 [2024-12-05 14:04:45.740061] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:33:46.032 14:04:45 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:33:46.290 14:04:45 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme 00:33:46.290 14:04:45 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:33:46.290 14:04:45 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:33:46.290 14:04:45 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:46.290 14:04:45 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:46.290 14:04:45 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:46.290 14:04:45 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:33:46.290 14:04:45 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:33:46.290 14:04:45 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:46.290 14:04:45 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:46.290 14:04:45 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:33:46.291 14:04:45 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:33:46.291 14:04:45 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:46.291 14:04:45 
nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:46.291 14:04:45 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:46.291 14:04:45 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:46.291 14:04:45 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:33:46.291 14:04:45 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:33:46.291 14:04:45 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:46.291 14:04:45 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:46.291 14:04:45 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:46.291 14:04:46 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:33:46.291 14:04:46 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:33:46.600 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:33:46.600 fio-3.35 00:33:46.600 Starting 1 thread 00:33:49.188 00:33:49.188 test: (groupid=0, jobs=1): err= 0: pid=1898809: Thu Dec 5 14:04:48 2024 00:33:49.188 read: IOPS=18.9k, BW=73.6MiB/s (77.2MB/s)(148MiB/2003msec) 00:33:49.188 slat (nsec): min=1276, max=32471, avg=1392.06, stdev=450.62 00:33:49.188 clat (usec): min=1918, max=6143, avg=3368.42, stdev=74.60 00:33:49.188 lat (usec): min=1940, max=6144, avg=3369.82, stdev=74.55 00:33:49.188 clat percentiles (usec): 00:33:49.188 | 1.00th=[ 3326], 5.00th=[ 3359], 10.00th=[ 3359], 20.00th=[ 3359], 00:33:49.188 | 30.00th=[ 3359], 40.00th=[ 3359], 50.00th=[ 3359], 60.00th=[ 3359], 00:33:49.188 | 70.00th=[ 3359], 80.00th=[ 3392], 90.00th=[ 3392], 95.00th=[ 3392], 00:33:49.188 | 99.00th=[ 3392], 99.50th=[ 3425], 99.90th=[ 4424], 99.95th=[ 5342], 00:33:49.188 | 99.99th=[ 6128] 00:33:49.188 bw ( KiB/s): min=73728, max=76072, per=99.98%, avg=75394.00, stdev=1114.35, samples=4 00:33:49.188 iops : min=18432, max=19018, avg=18848.50, stdev=278.59, samples=4 00:33:49.188 write: IOPS=18.9k, BW=73.7MiB/s (77.3MB/s)(148MiB/2003msec); 0 zone resets 00:33:49.188 slat (nsec): min=1311, max=24082, avg=1694.64, stdev=506.10 00:33:49.188 clat (usec): min=2502, max=6154, avg=3366.58, stdev=71.24 00:33:49.188 lat (usec): min=2507, max=6155, avg=3368.27, stdev=71.19 00:33:49.188 clat percentiles (usec): 00:33:49.188 | 1.00th=[ 3326], 5.00th=[ 3326], 10.00th=[ 3359], 20.00th=[ 3359], 00:33:49.188 | 30.00th=[ 3359], 40.00th=[ 3359], 50.00th=[ 3359], 60.00th=[ 3359], 00:33:49.188 | 70.00th=[ 3359], 80.00th=[ 3392], 90.00th=[ 3392], 95.00th=[ 3392], 00:33:49.188 | 99.00th=[ 3392], 99.50th=[ 3458], 99.90th=[ 4080], 99.95th=[ 5276], 00:33:49.188 | 99.99th=[ 6128] 00:33:49.188 bw ( KiB/s): min=73696, max=76112, per=99.99%, avg=75448.00, stdev=1169.66, samples=4 00:33:49.188 iops : min=18424, max=19028, avg=18862.00, stdev=292.42, samples=4 00:33:49.188 lat (msec) : 2=0.01%, 4=99.87%, 10=0.13% 00:33:49.188 cpu : usr=99.60%, sys=0.05%, ctx=15, majf=0, minf=4 00:33:49.188 IO depths : 1=0.1%, 2=0.1%, 
4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:33:49.188 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:49.188 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:49.188 issued rwts: total=37762,37786,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:49.188 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:49.188 00:33:49.188 Run status group 0 (all jobs): 00:33:49.188 READ: bw=73.6MiB/s (77.2MB/s), 73.6MiB/s-73.6MiB/s (77.2MB/s-77.2MB/s), io=148MiB (155MB), run=2003-2003msec 00:33:49.188 WRITE: bw=73.7MiB/s (77.3MB/s), 73.7MiB/s-73.7MiB/s (77.3MB/s-77.3MB/s), io=148MiB (155MB), run=2003-2003msec 00:33:49.188 14:04:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:33:49.188 14:04:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:33:49.188 14:04:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:49.188 14:04:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:49.188 14:04:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:49.188 14:04:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:33:49.188 14:04:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:33:49.188 14:04:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:49.188 14:04:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:49.188 14:04:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:33:49.188 14:04:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:33:49.188 14:04:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:49.188 14:04:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:49.188 14:04:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:49.188 14:04:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:49.188 14:04:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:33:49.188 14:04:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:33:49.188 14:04:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:49.188 14:04:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:49.188 14:04:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:49.188 14:04:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- 
# LD_PRELOAD=' /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:33:49.188 14:04:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:33:49.446 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:33:49.446 fio-3.35 00:33:49.446 Starting 1 thread 00:33:51.974 00:33:51.974 test: (groupid=0, jobs=1): err= 0: pid=1899418: Thu Dec 5 14:04:51 2024 00:33:51.974 read: IOPS=15.2k, BW=237MiB/s (249MB/s)(465MiB/1961msec) 00:33:51.974 slat (nsec): min=2119, max=47968, avg=2406.17, stdev=992.25 00:33:51.974 clat (usec): min=437, max=7545, avg=1562.85, stdev=1269.47 00:33:51.974 lat (usec): min=439, max=7564, avg=1565.26, stdev=1269.76 00:33:51.974 clat percentiles (usec): 00:33:51.974 | 1.00th=[ 635], 5.00th=[ 725], 10.00th=[ 783], 20.00th=[ 865], 00:33:51.974 | 30.00th=[ 922], 40.00th=[ 1004], 50.00th=[ 1106], 60.00th=[ 1221], 00:33:51.974 | 70.00th=[ 1352], 80.00th=[ 1532], 90.00th=[ 4424], 95.00th=[ 4686], 00:33:51.974 | 99.00th=[ 6063], 99.50th=[ 6521], 99.90th=[ 7046], 99.95th=[ 7111], 00:33:51.974 | 99.99th=[ 7504] 00:33:51.974 bw ( KiB/s): min=105536, max=125248, per=48.28%, avg=117224.00, stdev=8351.75, samples=4 00:33:51.974 iops : min= 6596, max= 7828, avg=7326.50, stdev=521.98, samples=4 00:33:51.974 write: IOPS=8621, BW=135MiB/s (141MB/s)(238MiB/1768msec); 0 zone resets 00:33:51.974 slat (usec): min=24, max=135, avg=27.75, stdev= 4.87 00:33:51.974 clat (usec): min=3983, max=18939, avg=11887.32, stdev=1771.98 00:33:51.974 lat (usec): min=4009, max=18965, avg=11915.07, stdev=1771.70 00:33:51.974 clat percentiles (usec): 00:33:51.974 | 1.00th=[ 6718], 5.00th=[ 9241], 10.00th=[ 9765], 20.00th=[10552], 00:33:51.974 | 30.00th=[11076], 40.00th=[11469], 50.00th=[11863], 60.00th=[12256], 00:33:51.974 | 70.00th=[12780], 80.00th=[13173], 90.00th=[13960], 95.00th=[14615], 00:33:51.974 | 99.00th=[16712], 99.50th=[17171], 99.90th=[18220], 99.95th=[18482], 00:33:51.974 | 99.99th=[19006] 00:33:51.974 bw ( KiB/s): min=109152, max=131200, per=87.77%, avg=121080.00, stdev=9254.60, samples=4 00:33:51.974 iops : min= 6822, max= 8200, avg=7567.50, stdev=578.41, samples=4 00:33:51.974 lat (usec) : 500=0.02%, 750=4.41%, 1000=21.86% 00:33:51.974 lat (msec) : 2=30.68%, 4=2.16%, 10=11.29%, 20=29.58% 00:33:51.974 cpu : usr=96.31%, sys=1.70%, ctx=226, majf=0, minf=4 00:33:51.974 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:33:51.974 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:51.974 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:51.974 issued rwts: total=29759,15243,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:51.974 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:51.974 00:33:51.974 Run status group 0 (all jobs): 00:33:51.974 READ: bw=237MiB/s (249MB/s), 237MiB/s-237MiB/s (249MB/s-249MB/s), io=465MiB (488MB), run=1961-1961msec 00:33:51.974 WRITE: bw=135MiB/s (141MB/s), 135MiB/s-135MiB/s (141MB/s-141MB/s), io=238MiB (250MB), run=1768-1768msec 00:33:51.974 14:04:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:51.974 14:04:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 
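Both fio runs above go through the same fio_plugin helper; with the sanitizer probing stripped out, the invocation is a plain LD_PRELOAD of the SPDK ioengine. A sketch under this run's paths (the asan_lib composition mirrors the trace, where the probe came back empty):

# Run fio with the SPDK NVMe ioengine over NVMe-oF/RDMA, as fio_plugin does above.
SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
PLUGIN=$SPDK/build/fio/spdk_nvme

# If fio links ASan, its runtime must be preloaded ahead of the plugin;
# the harness probes for it with ldd + grep (empty on this build).
asan_lib=$(ldd "$PLUGIN" | grep libasan | awk '{print $3}')

LD_PRELOAD="$asan_lib $PLUGIN" /usr/src/fio/fio \
    "$SPDK/app/fio/nvme/example_config.fio" \
    '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' \
    --bs=4096
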
00:33:51.974 14:04:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:33:51.974 14:04:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:33:51.974 14:04:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # bdfs=() 00:33:51.974 14:04:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # local bdfs 00:33:51.974 14:04:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:33:51.974 14:04:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:33:51.974 14:04:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:33:51.974 14:04:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:33:51.974 14:04:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:d8:00.0 00:33:51.974 14:04:51 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:d8:00.0 -i 192.168.100.8 00:33:55.260 Nvme0n1 00:33:55.260 14:04:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:34:07.462 14:05:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=da6bb640-1181-4bc5-a337-800488bec807 00:34:07.462 14:05:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb da6bb640-1181-4bc5-a337-800488bec807 00:34:07.462 14:05:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # local lvs_uuid=da6bb640-1181-4bc5-a337-800488bec807 00:34:07.462 14:05:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # local lvs_info 00:34:07.462 14:05:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # local fc 00:34:07.462 14:05:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # local cs 00:34:07.462 14:05:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:34:07.462 14:05:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:34:07.462 { 00:34:07.462 "uuid": "da6bb640-1181-4bc5-a337-800488bec807", 00:34:07.462 "name": "lvs_0", 00:34:07.462 "base_bdev": "Nvme0n1", 00:34:07.462 "total_data_clusters": 3725, 00:34:07.462 "free_clusters": 3725, 00:34:07.462 "block_size": 512, 00:34:07.462 "cluster_size": 1073741824 00:34:07.462 } 00:34:07.462 ]' 00:34:07.462 14:05:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="da6bb640-1181-4bc5-a337-800488bec807") .free_clusters' 00:34:07.462 14:05:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # fc=3725 00:34:07.462 14:05:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="da6bb640-1181-4bc5-a337-800488bec807") .cluster_size' 00:34:07.462 14:05:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # cs=1073741824 00:34:07.462 14:05:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1377 -- # free_mb=3814400 00:34:07.462 14:05:05 
nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1378 -- # echo 3814400 00:34:07.462 3814400 00:34:07.463 14:05:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 3814400 00:34:07.463 20a1a6c8-ed4e-4882-89c7-3b344f820e93 00:34:07.463 14:05:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:34:07.463 14:05:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:34:07.463 14:05:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:34:07.463 14:05:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:34:07.463 14:05:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:34:07.463 14:05:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:34:07.463 14:05:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:07.463 14:05:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:34:07.463 14:05:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:34:07.463 14:05:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:34:07.463 14:05:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:34:07.463 14:05:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:07.463 14:05:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:34:07.463 14:05:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:34:07.463 14:05:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:07.463 14:05:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:07.463 14:05:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:07.463 14:05:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:07.463 14:05:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:34:07.463 14:05:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:34:07.463 14:05:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:07.463 14:05:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:07.463 14:05:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:07.463 14:05:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:34:07.463 14:05:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:34:07.722 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:34:07.722 fio-3.35 00:34:07.722 Starting 1 thread 00:34:10.256 00:34:10.256 test: (groupid=0, jobs=1): err= 0: pid=1902767: Thu Dec 5 14:05:09 2024 00:34:10.256 read: IOPS=6792, BW=26.5MiB/s (27.8MB/s)(53.2MiB/2005msec) 00:34:10.256 slat (nsec): min=1325, max=22188, avg=1425.33, stdev=324.58 00:34:10.256 clat (usec): min=162, max=885728, avg=9402.30, stdev=61016.65 00:34:10.256 lat (usec): min=163, max=885734, avg=9403.72, stdev=61016.72 00:34:10.256 clat percentiles (msec): 00:34:10.256 | 1.00th=[ 5], 5.00th=[ 6], 10.00th=[ 6], 20.00th=[ 6], 00:34:10.256 | 30.00th=[ 6], 40.00th=[ 6], 50.00th=[ 6], 60.00th=[ 6], 00:34:10.256 | 70.00th=[ 6], 80.00th=[ 6], 90.00th=[ 6], 95.00th=[ 6], 00:34:10.256 | 99.00th=[ 6], 99.50th=[ 9], 99.90th=[ 885], 99.95th=[ 885], 00:34:10.256 | 99.99th=[ 885] 00:34:10.256 bw ( KiB/s): min= 384, max=49568, per=99.78%, avg=27108.00, stdev=26142.38, samples=4 00:34:10.256 iops : min= 96, max=12392, avg=6777.00, stdev=6535.59, samples=4 00:34:10.256 write: IOPS=6787, BW=26.5MiB/s (27.8MB/s)(53.2MiB/2005msec); 0 zone resets 00:34:10.256 slat (nsec): min=1374, max=22822, avg=1757.53, stdev=307.80 00:34:10.256 clat (usec): min=322, max=886127, avg=9184.44, stdev=59166.97 00:34:10.256 lat (usec): min=324, max=886130, avg=9186.20, stdev=59167.03 00:34:10.256 clat percentiles (msec): 00:34:10.256 | 1.00th=[ 5], 5.00th=[ 6], 10.00th=[ 6], 20.00th=[ 6], 00:34:10.256 | 30.00th=[ 6], 40.00th=[ 6], 50.00th=[ 6], 60.00th=[ 6], 00:34:10.256 | 70.00th=[ 6], 80.00th=[ 6], 90.00th=[ 6], 95.00th=[ 6], 00:34:10.256 | 99.00th=[ 6], 99.50th=[ 9], 99.90th=[ 885], 99.95th=[ 885], 00:34:10.256 | 99.99th=[ 885] 00:34:10.256 bw ( KiB/s): min= 416, max=49560, per=99.79%, avg=27090.00, stdev=25933.11, samples=4 00:34:10.256 iops : min= 104, max=12390, avg=6772.50, stdev=6483.28, samples=4 00:34:10.256 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.02% 00:34:10.256 lat (msec) : 2=0.04%, 4=0.40%, 10=99.03%, 1000=0.47% 00:34:10.256 cpu : usr=99.60%, sys=0.05%, ctx=16, majf=0, minf=4 00:34:10.256 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:34:10.256 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:10.256 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:10.256 issued rwts: total=13618,13608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:10.256 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:10.256 00:34:10.256 Run status group 0 (all jobs): 00:34:10.256 READ: bw=26.5MiB/s (27.8MB/s), 26.5MiB/s-26.5MiB/s (27.8MB/s-27.8MB/s), io=53.2MiB (55.8MB), run=2005-2005msec 00:34:10.256 WRITE: bw=26.5MiB/s (27.8MB/s), 26.5MiB/s-26.5MiB/s (27.8MB/s-27.8MB/s), io=53.2MiB (55.7MB), 
run=2005-2005msec 00:34:10.256 14:05:09 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:34:10.256 14:05:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:34:12.786 14:05:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=a75aef39-572e-4059-aa19-9ab3fedbec83 00:34:12.786 14:05:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb a75aef39-572e-4059-aa19-9ab3fedbec83 00:34:12.786 14:05:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # local lvs_uuid=a75aef39-572e-4059-aa19-9ab3fedbec83 00:34:12.786 14:05:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # local lvs_info 00:34:12.786 14:05:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # local fc 00:34:12.786 14:05:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # local cs 00:34:12.786 14:05:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:34:12.786 14:05:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:34:12.786 { 00:34:12.786 "uuid": "da6bb640-1181-4bc5-a337-800488bec807", 00:34:12.786 "name": "lvs_0", 00:34:12.786 "base_bdev": "Nvme0n1", 00:34:12.786 "total_data_clusters": 3725, 00:34:12.786 "free_clusters": 0, 00:34:12.786 "block_size": 512, 00:34:12.786 "cluster_size": 1073741824 00:34:12.786 }, 00:34:12.786 { 00:34:12.786 "uuid": "a75aef39-572e-4059-aa19-9ab3fedbec83", 00:34:12.786 "name": "lvs_n_0", 00:34:12.786 "base_bdev": "20a1a6c8-ed4e-4882-89c7-3b344f820e93", 00:34:12.786 "total_data_clusters": 952668, 00:34:12.786 "free_clusters": 952668, 00:34:12.786 "block_size": 512, 00:34:12.786 "cluster_size": 4194304 00:34:12.786 } 00:34:12.786 ]' 00:34:12.786 14:05:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="a75aef39-572e-4059-aa19-9ab3fedbec83") .free_clusters' 00:34:12.786 14:05:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # fc=952668 00:34:12.786 14:05:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="a75aef39-572e-4059-aa19-9ab3fedbec83") .cluster_size' 00:34:12.786 14:05:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # cs=4194304 00:34:12.786 14:05:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1377 -- # free_mb=3810672 00:34:12.786 14:05:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1378 -- # echo 3810672 00:34:12.786 3810672 00:34:12.786 14:05:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 3810672 00:34:14.683 d3837101-b3e7-4d93-9a66-0b2dca0c0e0b 00:34:14.683 14:05:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:34:14.683 14:05:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 
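The free_mb figures in the trace come straight from the lvstore JSON: free MiB = free_clusters * cluster_size / 1 MiB. Worked out for both stores above (a sketch of get_lvs_free_mb's arithmetic, using the cluster counts reported by bdev_lvol_get_lvstores):

# get_lvs_free_mb: free MiB = free_clusters * cluster_size / (1024 * 1024)
echo $(( 3725   * 1073741824 / 1048576 ))   # lvs_0:   3814400 MiB (1 GiB clusters)
echo $(( 952668 * 4194304    / 1048576 ))   # lvs_n_0: 3810672 MiB (4 MiB clusters)
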
00:34:14.683 14:05:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:34:14.941 14:05:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:34:14.941 14:05:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:34:14.941 14:05:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:34:14.941 14:05:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:14.941 14:05:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:34:14.941 14:05:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:34:14.941 14:05:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:34:14.941 14:05:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:34:14.941 14:05:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:14.941 14:05:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:34:14.941 14:05:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:34:14.941 14:05:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:14.941 14:05:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:14.941 14:05:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:14.941 14:05:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:14.941 14:05:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:34:14.941 14:05:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:34:14.941 14:05:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:14.941 14:05:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:14.941 14:05:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:14.941 14:05:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:34:14.941 14:05:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:34:15.199 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 
4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:34:15.199 fio-3.35 00:34:15.199 Starting 1 thread 00:34:17.733 00:34:17.733 test: (groupid=0, jobs=1): err= 0: pid=1904212: Thu Dec 5 14:05:17 2024 00:34:17.733 read: IOPS=10.6k, BW=41.5MiB/s (43.5MB/s)(83.1MiB/2005msec) 00:34:17.733 slat (nsec): min=1283, max=102770, avg=1380.56, stdev=739.16 00:34:17.733 clat (usec): min=2589, max=10845, avg=5935.36, stdev=172.99 00:34:17.733 lat (usec): min=2592, max=10846, avg=5936.74, stdev=172.95 00:34:17.733 clat percentiles (usec): 00:34:17.733 | 1.00th=[ 5866], 5.00th=[ 5866], 10.00th=[ 5866], 20.00th=[ 5932], 00:34:17.733 | 30.00th=[ 5932], 40.00th=[ 5932], 50.00th=[ 5932], 60.00th=[ 5932], 00:34:17.733 | 70.00th=[ 5932], 80.00th=[ 5932], 90.00th=[ 5997], 95.00th=[ 5997], 00:34:17.733 | 99.00th=[ 6063], 99.50th=[ 6128], 99.90th=[ 8356], 99.95th=[ 9765], 00:34:17.733 | 99.99th=[10814] 00:34:17.733 bw ( KiB/s): min=40360, max=43320, per=99.97%, avg=42446.00, stdev=1407.90, samples=4 00:34:17.733 iops : min=10090, max=10830, avg=10611.50, stdev=351.97, samples=4 00:34:17.733 write: IOPS=10.6k, BW=41.5MiB/s (43.5MB/s)(83.1MiB/2005msec); 0 zone resets 00:34:17.733 slat (nsec): min=1310, max=13543, avg=1715.41, stdev=263.46 00:34:17.733 clat (usec): min=2575, max=10863, avg=5955.00, stdev=168.44 00:34:17.733 lat (usec): min=2579, max=10865, avg=5956.72, stdev=168.42 00:34:17.733 clat percentiles (usec): 00:34:17.733 | 1.00th=[ 5866], 5.00th=[ 5932], 10.00th=[ 5932], 20.00th=[ 5932], 00:34:17.733 | 30.00th=[ 5932], 40.00th=[ 5932], 50.00th=[ 5932], 60.00th=[ 5932], 00:34:17.733 | 70.00th=[ 5997], 80.00th=[ 5997], 90.00th=[ 5997], 95.00th=[ 5997], 00:34:17.733 | 99.00th=[ 6128], 99.50th=[ 6194], 99.90th=[ 8291], 99.95th=[ 9765], 00:34:17.733 | 99.99th=[ 9896] 00:34:17.733 bw ( KiB/s): min=40848, max=43144, per=99.93%, avg=42424.00, stdev=1062.79, samples=4 00:34:17.733 iops : min=10212, max=10786, avg=10606.00, stdev=265.70, samples=4 00:34:17.733 lat (msec) : 4=0.04%, 10=99.95%, 20=0.01% 00:34:17.733 cpu : usr=99.55%, sys=0.10%, ctx=16, majf=0, minf=4 00:34:17.733 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:34:17.733 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:17.733 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:17.733 issued rwts: total=21283,21279,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:17.733 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:17.733 00:34:17.733 Run status group 0 (all jobs): 00:34:17.733 READ: bw=41.5MiB/s (43.5MB/s), 41.5MiB/s-41.5MiB/s (43.5MB/s-43.5MB/s), io=83.1MiB (87.2MB), run=2005-2005msec 00:34:17.733 WRITE: bw=41.5MiB/s (43.5MB/s), 41.5MiB/s-41.5MiB/s (43.5MB/s-43.5MB/s), io=83.1MiB (87.2MB), run=2005-2005msec 00:34:17.733 14:05:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:34:17.733 14:05:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:34:17.733 14:05:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:34:32.608 14:05:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:34:32.608 14:05:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@78 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:34:44.829 14:05:43 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:34:44.829 14:05:43 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:34:49.030 14:05:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:34:49.030 14:05:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:34:49.030 14:05:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:34:49.030 14:05:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:49.030 14:05:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:34:49.030 14:05:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:34:49.030 14:05:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:34:49.030 14:05:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:34:49.030 14:05:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:49.030 14:05:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:34:49.030 rmmod nvme_rdma 00:34:49.031 rmmod nvme_fabrics 00:34:49.031 14:05:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:49.031 14:05:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:34:49.031 14:05:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:34:49.031 14:05:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 1898243 ']' 00:34:49.031 14:05:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 1898243 00:34:49.031 14:05:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 1898243 ']' 00:34:49.031 14:05:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 1898243 00:34:49.031 14:05:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:34:49.031 14:05:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:49.031 14:05:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1898243 00:34:49.031 14:05:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:49.031 14:05:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:49.031 14:05:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1898243' 00:34:49.031 killing process with pid 1898243 00:34:49.031 14:05:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 1898243 00:34:49.031 14:05:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 1898243 00:34:49.290 14:05:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:49.290 14:05:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:34:49.290 00:34:49.290 real 1m10.716s 00:34:49.290 user 5m2.951s 00:34:49.290 sys 0m6.748s 00:34:49.290 14:05:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:34:49.290 14:05:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.290 ************************************ 00:34:49.290 END TEST nvmf_fio_host 00:34:49.290 ************************************ 00:34:49.290 14:05:48 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=rdma 00:34:49.290 14:05:48 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:34:49.290 14:05:48 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:49.290 14:05:48 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.290 ************************************ 00:34:49.290 START TEST nvmf_failover 00:34:49.290 ************************************ 00:34:49.290 14:05:49 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=rdma 00:34:49.290 * Looking for test storage... 00:34:49.290 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:34:49.290 14:05:49 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:34:49.290 14:05:49 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lcov --version 00:34:49.290 14:05:49 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:34:49.550 14:05:49 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:34:49.551 14:05:49 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:49.551 14:05:49 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:49.551 14:05:49 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:49.551 14:05:49 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:34:49.551 14:05:49 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:34:49.551 14:05:49 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:34:49.551 14:05:49 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:34:49.551 14:05:49 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:34:49.551 14:05:49 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:34:49.551 14:05:49 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:34:49.551 14:05:49 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:49.551 14:05:49 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:34:49.551 14:05:49 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:34:49.551 14:05:49 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:49.551 14:05:49 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:49.551 14:05:49 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:34:49.551 14:05:49 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:34:49.551 14:05:49 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:49.551 14:05:49 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:34:49.551 14:05:49 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:34:49.551 14:05:49 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:34:49.551 14:05:49 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:34:49.551 14:05:49 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:49.551 14:05:49 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:34:49.551 14:05:49 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:34:49.551 14:05:49 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:49.551 14:05:49 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:49.551 14:05:49 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:34:49.551 14:05:49 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:49.551 14:05:49 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:34:49.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:49.551 --rc genhtml_branch_coverage=1 00:34:49.551 --rc genhtml_function_coverage=1 00:34:49.551 --rc genhtml_legend=1 00:34:49.551 --rc geninfo_all_blocks=1 00:34:49.551 --rc geninfo_unexecuted_blocks=1 00:34:49.551 00:34:49.551 ' 00:34:49.551 14:05:49 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:34:49.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:49.551 --rc genhtml_branch_coverage=1 00:34:49.551 --rc genhtml_function_coverage=1 00:34:49.551 --rc genhtml_legend=1 00:34:49.551 --rc geninfo_all_blocks=1 00:34:49.551 --rc geninfo_unexecuted_blocks=1 00:34:49.551 00:34:49.551 ' 00:34:49.551 14:05:49 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:34:49.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:49.551 --rc genhtml_branch_coverage=1 00:34:49.551 --rc genhtml_function_coverage=1 00:34:49.551 --rc genhtml_legend=1 00:34:49.551 --rc geninfo_all_blocks=1 00:34:49.551 --rc geninfo_unexecuted_blocks=1 00:34:49.551 00:34:49.551 ' 00:34:49.551 14:05:49 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:34:49.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:49.551 --rc genhtml_branch_coverage=1 00:34:49.551 --rc genhtml_function_coverage=1 00:34:49.551 --rc genhtml_legend=1 00:34:49.551 --rc geninfo_all_blocks=1 00:34:49.551 --rc geninfo_unexecuted_blocks=1 00:34:49.551 00:34:49.551 ' 00:34:49.551 14:05:49 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:34:49.551 14:05:49 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:34:49.551 14:05:49 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:49.551 14:05:49 
nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:49.551 14:05:49 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:49.551 14:05:49 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:49.551 14:05:49 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:49.551 14:05:49 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:49.551 14:05:49 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:49.551 14:05:49 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:49.551 14:05:49 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:49.551 14:05:49 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:49.551 14:05:49 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:34:49.551 14:05:49 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:34:49.551 14:05:49 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:49.551 14:05:49 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:49.551 14:05:49 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:49.551 14:05:49 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:49.551 14:05:49 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:34:49.551 14:05:49 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:34:49.551 14:05:49 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:49.551 14:05:49 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:49.551 14:05:49 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:49.551 14:05:49 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:49.551 14:05:49 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:49.551 14:05:49 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:49.551 14:05:49 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:34:49.551 14:05:49 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:49.551 14:05:49 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:34:49.551 14:05:49 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:49.551 14:05:49 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:49.551 14:05:49 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:49.551 14:05:49 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:49.551 14:05:49 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:49.551 14:05:49 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:49.551 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:49.551 14:05:49 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:49.551 14:05:49 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:49.551 14:05:49 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:49.551 14:05:49 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:49.551 14:05:49 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:49.551 14:05:49 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # 
rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:34:49.551 14:05:49 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:34:49.551 14:05:49 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:34:49.551 14:05:49 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:34:49.552 14:05:49 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:49.552 14:05:49 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:49.552 14:05:49 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:49.552 14:05:49 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:49.552 14:05:49 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:49.552 14:05:49 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:49.552 14:05:49 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:49.552 14:05:49 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:49.552 14:05:49 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:49.552 14:05:49 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:34:49.552 14:05:49 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:34:56.120 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:56.120 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:34:56.120 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:56.120 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:56.120 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:56.120 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:56.120 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:56.120 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:34:56.120 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:56.120 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:34:56.120 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:34:56.120 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:34:56.120 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:34:56.120 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:34:56.120 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:34:56.120 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:56.120 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:56.120 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:56.120 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:56.120 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:56.120 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:56.120 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:56.120 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:56.120 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:56.120 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:56.120 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:56.120 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:56.120 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:56.120 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:34:56.120 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:34:56.120 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:34:56.120 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:34:56.120 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:34:56.120 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:56.120 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:56.120 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:34:56.120 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:34:56.120 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:34:56.120 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:34:56.120 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:34:56.120 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:34:56.120 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:34:56.120 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:34:56.120 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:56.120 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:34:56.120 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:34:56.120 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:34:56.120 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:34:56.120 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:34:56.120 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:34:56.121 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # 
[[ rdma == rdma ]] 00:34:56.121 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:34:56.121 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:56.121 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:34:56.121 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:56.121 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:56.121 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:34:56.121 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:56.121 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:56.121 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:34:56.121 Found net devices under 0000:18:00.0: mlx_0_0 00:34:56.121 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:56.121 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:56.121 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:56.121 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:34:56.121 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:56.121 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:56.121 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:34:56.121 Found net devices under 0000:18:00.1: mlx_0_1 00:34:56.121 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:56.121 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:56.121 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:34:56.121 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:56.121 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:34:56.121 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:34:56.121 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@448 -- # rdma_device_init 00:34:56.121 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:34:56.121 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@62 -- # uname 00:34:56.121 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:34:56.121 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@66 -- # modprobe ib_cm 00:34:56.121 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@67 -- # modprobe ib_core 00:34:56.121 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@68 -- # modprobe ib_umad 00:34:56.121 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:34:56.121 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@70 -- # modprobe iw_cm 00:34:56.121 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@71 -- # 
modprobe rdma_cm 00:34:56.121 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:34:56.121 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@530 -- # allocate_nic_ips 00:34:56.121 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:34:56.121 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@77 -- # get_rdma_if_list 00:34:56.121 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:34:56.121 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:34:56.121 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:34:56.121 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:34:56.121 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:34:56.121 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:34:56.121 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:34:56.121 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:34:56.121 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@108 -- # echo mlx_0_0 00:34:56.121 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@109 -- # continue 2 00:34:56.121 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:34:56.121 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:34:56.121 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:34:56.121 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:34:56.121 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:34:56.121 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@108 -- # echo mlx_0_1 00:34:56.121 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@109 -- # continue 2 00:34:56.121 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:34:56.121 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:34:56.121 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:34:56.121 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:34:56.121 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # awk '{print $4}' 00:34:56.121 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # cut -d/ -f1 00:34:56.121 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:34:56.121 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:34:56.121 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:34:56.121 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:34:56.121 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:34:56.121 altname enp24s0f0np0 00:34:56.121 altname ens785f0np0 00:34:56.121 inet 192.168.100.8/24 scope global mlx_0_0 00:34:56.121 
valid_lft forever preferred_lft forever 00:34:56.121 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:34:56.121 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:34:56.121 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:34:56.121 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:34:56.121 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # awk '{print $4}' 00:34:56.121 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # cut -d/ -f1 00:34:56.121 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:34:56.121 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:34:56.121 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:34:56.121 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:34:56.121 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:34:56.121 altname enp24s0f1np1 00:34:56.121 altname ens785f1np1 00:34:56.121 inet 192.168.100.9/24 scope global mlx_0_1 00:34:56.121 valid_lft forever preferred_lft forever 00:34:56.121 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:34:56.121 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:56.121 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:34:56.121 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:34:56.121 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:34:56.121 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@90 -- # get_rdma_if_list 00:34:56.121 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:34:56.121 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:34:56.121 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:34:56.121 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:34:56.121 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:34:56.121 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:34:56.121 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:34:56.121 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:34:56.121 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@108 -- # echo mlx_0_0 00:34:56.121 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@109 -- # continue 2 00:34:56.121 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:34:56.121 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:34:56.121 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:34:56.121 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:34:56.121 14:05:55 
nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:34:56.121 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@108 -- # echo mlx_0_1 00:34:56.121 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@109 -- # continue 2 00:34:56.121 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:34:56.121 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:34:56.121 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:34:56.121 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:34:56.121 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # awk '{print $4}' 00:34:56.121 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # cut -d/ -f1 00:34:56.121 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:34:56.121 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:34:56.121 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:34:56.121 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:34:56.121 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # awk '{print $4}' 00:34:56.121 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # cut -d/ -f1 00:34:56.121 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:34:56.122 192.168.100.9' 00:34:56.122 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:34:56.122 192.168.100.9' 00:34:56.122 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@485 -- # head -n 1 00:34:56.122 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:34:56.122 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:34:56.122 192.168.100.9' 00:34:56.122 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@486 -- # tail -n +2 00:34:56.122 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@486 -- # head -n 1 00:34:56.122 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:34:56.122 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:34:56.122 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:34:56.122 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:34:56.122 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:34:56.122 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:34:56.122 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:34:56.122 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:56.122 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:56.122 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:34:56.122 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=1913207 00:34:56.122 
14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 1913207 00:34:56.122 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:34:56.122 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 1913207 ']' 00:34:56.122 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:56.122 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:56.122 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:56.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:56.122 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:56.122 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:34:56.122 [2024-12-05 14:05:55.316281] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 00:34:56.122 [2024-12-05 14:05:55.316328] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:56.122 [2024-12-05 14:05:55.389801] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:34:56.122 [2024-12-05 14:05:55.411498] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:56.122 [2024-12-05 14:05:55.411536] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:56.122 [2024-12-05 14:05:55.411543] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:56.122 [2024-12-05 14:05:55.411548] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:56.122 [2024-12-05 14:05:55.411552] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
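Annotation: at this point nvmf_tgt is up (pid 1913207, core mask 0xE) and waitforlisten has confirmed the RPC socket at /var/tmp/spdk.sock. The test then configures the target over that socket; the sketch below condenses the rpc.py calls the trace shows next into a standalone sequence. Here rpc.py is an abbreviation for the full /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py path assigned to $rpc_py above; all commands, flags, and names are copied from the trace, not invented.

rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192    # create the RDMA transport (options as in the trace)
rpc.py bdev_malloc_create 64 512 -b Malloc0                               # 64 MiB RAM-backed bdev, 512 B blocks ($MALLOC_BDEV_SIZE/$MALLOC_BLOCK_SIZE)
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host; -s: serial number
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0           # expose the bdev as namespace 1
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

Listeners on ports 4421 and 4422 are added the same way, giving the test three interchangeable RDMA paths to rotate through.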
00:34:56.122 [2024-12-05 14:05:55.412874] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:56.122 [2024-12-05 14:05:55.412964] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:56.122 [2024-12-05 14:05:55.412964] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:56.122 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:56.122 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:34:56.122 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:56.122 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:56.122 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:34:56.122 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:56.122 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:34:56.122 [2024-12-05 14:05:55.726586] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1d8d6c0/0x1d91bb0) succeed. 00:34:56.122 [2024-12-05 14:05:55.734716] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1d8ecb0/0x1dd3250) succeed. 00:34:56.122 14:05:55 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:34:56.379 Malloc0 00:34:56.379 14:05:56 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:56.636 14:05:56 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:56.636 14:05:56 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:34:56.894 [2024-12-05 14:05:56.567195] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:34:56.894 14:05:56 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:34:56.894 [2024-12-05 14:05:56.739520] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:34:57.152 14:05:56 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:34:57.152 [2024-12-05 14:05:56.916107] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4422 *** 00:34:57.152 14:05:56 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=1913505 00:34:57.152 14:05:56 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w 
verify -t 15 -f 00:34:57.152 14:05:56 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:57.152 14:05:56 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 1913505 /var/tmp/bdevperf.sock 00:34:57.152 14:05:56 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 1913505 ']' 00:34:57.152 14:05:56 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:34:57.152 14:05:56 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:57.152 14:05:56 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:34:57.152 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:34:57.152 14:05:56 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:57.152 14:05:56 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:34:57.411 14:05:57 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:57.411 14:05:57 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:34:57.411 14:05:57 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:34:57.670 NVMe0n1 00:34:57.670 14:05:57 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:34:57.929 00:34:57.929 14:05:57 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=1913757 00:34:57.929 14:05:57 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:34:57.929 14:05:57 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:34:58.866 14:05:58 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:34:59.125 14:05:58 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:35:02.411 14:06:01 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:35:02.411 00:35:02.411 14:06:02 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:35:02.670 14:06:02 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:35:05.957 14:06:05 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:35:05.957 [2024-12-05 14:06:05.492889] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:35:05.957 14:06:05 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:35:06.892 14:06:06 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:35:06.892 14:06:06 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 1913757 00:35:13.465 { 00:35:13.465 "results": [ 00:35:13.465 { 00:35:13.465 "job": "NVMe0n1", 00:35:13.465 "core_mask": "0x1", 00:35:13.465 "workload": "verify", 00:35:13.465 "status": "finished", 00:35:13.465 "verify_range": { 00:35:13.465 "start": 0, 00:35:13.465 "length": 16384 00:35:13.465 }, 00:35:13.465 "queue_depth": 128, 00:35:13.465 "io_size": 4096, 00:35:13.465 "runtime": 15.005373, 00:35:13.465 "iops": 15054.60743961513, 00:35:13.465 "mibps": 58.807060310996604, 00:35:13.465 "io_failed": 5332, 00:35:13.465 "io_timeout": 0, 00:35:13.465 "avg_latency_us": 8285.629009646236, 00:35:13.465 "min_latency_us": 312.5096296296296, 00:35:13.465 "max_latency_us": 1043915.6622222222 00:35:13.465 } 00:35:13.465 ], 00:35:13.465 "core_count": 1 00:35:13.465 } 00:35:13.466 14:06:12 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 1913505 00:35:13.466 14:06:12 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 1913505 ']' 00:35:13.466 14:06:12 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 1913505 00:35:13.466 14:06:12 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:35:13.466 14:06:12 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:13.466 14:06:12 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1913505 00:35:13.466 14:06:12 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:13.466 14:06:12 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:13.466 14:06:12 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1913505' 00:35:13.466 killing process with pid 1913505 00:35:13.466 14:06:12 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 1913505 00:35:13.466 14:06:12 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 1913505 00:35:13.466 14:06:13 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:35:13.466 [2024-12-05 14:05:56.985594] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 
00:35:13.466 [2024-12-05 14:05:56.985653] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1913505 ] 00:35:13.466 [2024-12-05 14:05:57.057288] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:13.466 [2024-12-05 14:05:57.078666] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:13.466 Running I/O for 15 seconds... 00:35:13.466 18944.00 IOPS, 74.00 MiB/s [2024-12-05T13:06:13.319Z] 10368.00 IOPS, 40.50 MiB/s [2024-12-05T13:06:13.319Z] [2024-12-05 14:05:59.863208] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:35:13.466 [2024-12-05 14:05:59.863242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:7210 p:0 m:0 dnr:0 00:35:13.466 [2024-12-05 14:05:59.863251] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:35:13.466 [2024-12-05 14:05:59.863258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:7210 p:0 m:0 dnr:0 00:35:13.466 [2024-12-05 14:05:59.863265] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:35:13.466 [2024-12-05 14:05:59.863271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:7210 p:0 m:0 dnr:0 00:35:13.466 [2024-12-05 14:05:59.863277] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:35:13.466 [2024-12-05 14:05:59.863283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:7210 p:0 m:0 dnr:0 00:35:13.466 [2024-12-05 14:05:59.864944] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:35:13.466 [2024-12-05 14:05:59.864958] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 
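Annotation: the CQ transport error -6 above is the host noticing that the 4420 listener was torn out from under it. bdevperf had attached NVMe0 twice with -x failover (host/failover.sh@35 for 4420, @36 for 4421), so in the entries that follow bdev_nvme fails the controller over to 4421, and every queued WRITE completes with ABORTED - SQ DELETION before (per bdev_nvme's failover handling) being retried on the surviving path. A sketch of the listener rotation the test drives, using the same rpc.py abbreviation as above; the 3 s sleeps come from host/failover.sh@45/@50 and give the reconnects time to settle:

rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420   # drop the active path
sleep 3                                                                                              # I/O fails over to 4421
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421   # force a second failover, to 4422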
00:35:13.466 [2024-12-05 14:05:59.864968] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 192.168.100.8:4420 to 192.168.100.8:4421 00:35:13.466 [2024-12-05 14:05:59.864975] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] already in failed state 00:35:13.466 [2024-12-05 14:05:59.864990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:35184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.466 [2024-12-05 14:05:59.864997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f7e83000 sqhd:7210 p:0 m:0 dnr:0 00:35:13.466 [2024-12-05 14:05:59.865037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:35192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.466 [2024-12-05 14:05:59.865045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f7e83000 sqhd:7210 p:0 m:0 dnr:0 00:35:13.466 [2024-12-05 14:05:59.865073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:35200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.466 [2024-12-05 14:05:59.865080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f7e83000 sqhd:7210 p:0 m:0 dnr:0 00:35:13.466 [2024-12-05 14:05:59.865106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:35208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.466 [2024-12-05 14:05:59.865114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f7e83000 sqhd:7210 p:0 m:0 dnr:0 00:35:13.466 [2024-12-05 14:05:59.865140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:35216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.466 [2024-12-05 14:05:59.865147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f7e83000 sqhd:7210 p:0 m:0 dnr:0 00:35:13.466 [2024-12-05 14:05:59.865178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:35224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.466 [2024-12-05 14:05:59.865185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f7e83000 sqhd:7210 p:0 m:0 dnr:0 00:35:13.466 [2024-12-05 14:05:59.865211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:35232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.466 [2024-12-05 14:05:59.865218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f7e83000 sqhd:7210 p:0 m:0 dnr:0 00:35:13.466 [2024-12-05 14:05:59.865244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:35240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.466 [2024-12-05 14:05:59.865250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f7e83000 sqhd:7210 p:0 m:0 dnr:0 00:35:13.466 [2024-12-05 14:05:59.865277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:35248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.466 [2024-12-05 14:05:59.865283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f7e83000 sqhd:7210 p:0 m:0 dnr:0 
00:35:13.466 [2024-12-05 14:05:59.865309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:35256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.466 [2024-12-05 14:05:59.865316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f7e83000 sqhd:7210 p:0 m:0 dnr:0 00:35:13.466 [2024-12-05 14:05:59.865342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:35264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.466 [2024-12-05 14:05:59.865348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f7e83000 sqhd:7210 p:0 m:0 dnr:0 00:35:13.466 [2024-12-05 14:05:59.865377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:35272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.466 [2024-12-05 14:05:59.865384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f7e83000 sqhd:7210 p:0 m:0 dnr:0 00:35:13.466 [2024-12-05 14:05:59.865411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:35280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.466 [2024-12-05 14:05:59.865417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f7e83000 sqhd:7210 p:0 m:0 dnr:0 00:35:13.466 [2024-12-05 14:05:59.865443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:35288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.466 [2024-12-05 14:05:59.865450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f7e83000 sqhd:7210 p:0 m:0 dnr:0 00:35:13.466 [2024-12-05 14:05:59.865476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:35296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.466 [2024-12-05 14:05:59.865483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f7e83000 sqhd:7210 p:0 m:0 dnr:0 00:35:13.466 [2024-12-05 14:05:59.865509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:35304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.466 [2024-12-05 14:05:59.865516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f7e83000 sqhd:7210 p:0 m:0 dnr:0 00:35:13.466 [2024-12-05 14:05:59.865541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:35312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.466 [2024-12-05 14:05:59.865548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f7e83000 sqhd:7210 p:0 m:0 dnr:0 00:35:13.466 [2024-12-05 14:05:59.865576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:35320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.466 [2024-12-05 14:05:59.865583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f7e83000 sqhd:7210 p:0 m:0 dnr:0 00:35:13.466 [2024-12-05 14:05:59.865609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:35328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.466 [2024-12-05 14:05:59.865616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f7e83000 
sqhd:7210 p:0 m:0 dnr:0 00:35:13.466 [2024-12-05 14:05:59.865642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:35336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.466 [2024-12-05 14:05:59.865649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f7e83000 sqhd:7210 p:0 m:0 dnr:0 00:35:13.466 [2024-12-05 14:05:59.865675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:35344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.466 [2024-12-05 14:05:59.865682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f7e83000 sqhd:7210 p:0 m:0 dnr:0 00:35:13.466 [2024-12-05 14:05:59.865709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:35352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.466 [2024-12-05 14:05:59.865716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f7e83000 sqhd:7210 p:0 m:0 dnr:0 00:35:13.466 [2024-12-05 14:05:59.865742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:35360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.466 [2024-12-05 14:05:59.865749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f7e83000 sqhd:7210 p:0 m:0 dnr:0 00:35:13.466 [2024-12-05 14:05:59.865775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:35368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.466 [2024-12-05 14:05:59.865781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f7e83000 sqhd:7210 p:0 m:0 dnr:0 00:35:13.466 [2024-12-05 14:05:59.865808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:35376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.466 [2024-12-05 14:05:59.865814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f7e83000 sqhd:7210 p:0 m:0 dnr:0 00:35:13.466 [2024-12-05 14:05:59.865840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:35384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.467 [2024-12-05 14:05:59.865848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f7e83000 sqhd:7210 p:0 m:0 dnr:0 00:35:13.467 [2024-12-05 14:05:59.865874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:35392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.467 [2024-12-05 14:05:59.865880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f7e83000 sqhd:7210 p:0 m:0 dnr:0 00:35:13.467 [2024-12-05 14:05:59.865906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:35400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.467 [2024-12-05 14:05:59.865913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f7e83000 sqhd:7210 p:0 m:0 dnr:0 00:35:13.467 [2024-12-05 14:05:59.865938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:35408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.467 [2024-12-05 14:05:59.865945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:f7e83000 sqhd:7210 p:0 m:0 dnr:0
00:35:13.467 [2024-12-05 14:05:59.865970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:35416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:13.467 [2024-12-05 14:05:59.865979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f7e83000 sqhd:7210 p:0 m:0 dnr:0
[... the same WRITE / "ABORTED - SQ DELETION" NOTICE pair repeats for each remaining queued write, lba:35424 through lba:35832 ...]
00:35:13.468 [2024-12-05 14:05:59.867715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:34816 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004300000 len:0x1000 key:0x182900
00:35:13.468 [2024-12-05 14:05:59.867722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f7e83000 sqhd:7210 p:0 m:0 dnr:0
[... the same READ / "ABORTED - SQ DELETION" NOTICE pair repeats for each remaining queued read, lba:34824 through lba:35168 ...]
00:35:13.469 [2024-12-05 14:05:59.883268] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:35:13.469 [2024-12-05 14:05:59.883285] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:35:13.469 [2024-12-05 14:05:59.883292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:35176 len:8 PRP1 0x0 PRP2 0x0
00:35:13.469 [2024-12-05 14:05:59.883299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:13.469 [2024-12-05 14:05:59.883362] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress.
00:35:13.469 [2024-12-05 14:05:59.883395] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress.
00:35:13.469 [2024-12-05 14:05:59.885952] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:35:13.469 [2024-12-05 14:05:59.926330] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
00:35:13.469 12176.00 IOPS, 47.56 MiB/s [2024-12-05T13:06:13.322Z] 13894.50 IOPS, 54.28 MiB/s [2024-12-05T13:06:13.322Z] 13108.80 IOPS, 51.21 MiB/s [2024-12-05T13:06:13.322Z]
00:35:13.469 [2024-12-05 14:06:03.314372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:15504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:13.469 [2024-12-05 14:06:03.314412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f7e83000 sqhd:7210 p:0 m:0 dnr:0
[... the same NOTICE pairs repeat for the remaining queued I/O on this qpair: WRITE lba:15512 through lba:15784 (SGL DATA BLOCK) interleaved with READ lba:14896 through lba:15352 (SGL KEYED DATA BLOCK, key:0x183700), each completed ABORTED - SQ DELETION (00/08) ...]
00:35:13.472 [2024-12-05
14:06:03.315689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15360 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004364000 len:0x1000 key:0x183700 00:35:13.472 [2024-12-05 14:06:03.315695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f7e83000 sqhd:7210 p:0 m:0 dnr:0 00:35:13.472 [2024-12-05 14:06:03.315702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:15368 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004366000 len:0x1000 key:0x183700 00:35:13.472 [2024-12-05 14:06:03.315708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f7e83000 sqhd:7210 p:0 m:0 dnr:0 00:35:13.472 [2024-12-05 14:06:03.315716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:15376 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004368000 len:0x1000 key:0x183700 00:35:13.472 [2024-12-05 14:06:03.315722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f7e83000 sqhd:7210 p:0 m:0 dnr:0 00:35:13.472 [2024-12-05 14:06:03.315730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:15384 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000436a000 len:0x1000 key:0x183700 00:35:13.472 [2024-12-05 14:06:03.315736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f7e83000 sqhd:7210 p:0 m:0 dnr:0 00:35:13.472 [2024-12-05 14:06:03.315743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:15392 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000436c000 len:0x1000 key:0x183700 00:35:13.472 [2024-12-05 14:06:03.315749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f7e83000 sqhd:7210 p:0 m:0 dnr:0 00:35:13.472 [2024-12-05 14:06:03.315757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15400 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000436e000 len:0x1000 key:0x183700 00:35:13.472 [2024-12-05 14:06:03.315764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f7e83000 sqhd:7210 p:0 m:0 dnr:0 00:35:13.472 [2024-12-05 14:06:03.315771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:15792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.472 [2024-12-05 14:06:03.315777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f7e83000 sqhd:7210 p:0 m:0 dnr:0 00:35:13.472 [2024-12-05 14:06:03.315784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:15800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.472 [2024-12-05 14:06:03.315790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f7e83000 sqhd:7210 p:0 m:0 dnr:0 00:35:13.472 [2024-12-05 14:06:03.315797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.472 [2024-12-05 14:06:03.315803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f7e83000 sqhd:7210 p:0 m:0 dnr:0 00:35:13.472 [2024-12-05 14:06:03.315810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:15816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
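Each pair of records in this storm is SPDK echoing one failed submission and its completion: the 243:nvme_io_qpair_print_command entry gives the queue, command ID, namespace, and LBA, and the matching 474:spdk_nvme_print_completion entry gives the status. Two SGL shapes are visible: the READs carry SGL KEYED DATA BLOCK descriptors with an RDMA rkey (key:0x183700), since the target must RDMA-write the data back to the host, while the 4 KiB WRITEs (len:8 512-byte blocks, len:0x1000 bytes) use offset-based SGL DATA BLOCK descriptors, consistent with in-capsule data. A rough way to triage such a storm from a saved copy of this console output (console.log is a hypothetical capture, not a file this job writes):

  # Count aborted READs vs WRITEs, then confirm every completion
  # carries the same (00/08) status pair.
  grep -oE 'print_command: \*NOTICE\*: (READ|WRITE)' console.log | sort | uniq -c
  grep -o 'ABORTED - SQ DELETION (00/08)' console.log | wc -l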
00:35:13.472 [2024-12-05 14:06:03.315816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f7e83000 sqhd:7210 p:0 m:0 dnr:0 00:35:13.472 [2024-12-05 14:06:03.315823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:15824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.472 [2024-12-05 14:06:03.315829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f7e83000 sqhd:7210 p:0 m:0 dnr:0 00:35:13.472 [2024-12-05 14:06:03.315836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:15832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.472 [2024-12-05 14:06:03.315842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f7e83000 sqhd:7210 p:0 m:0 dnr:0 00:35:13.472 [2024-12-05 14:06:03.315849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:15840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.472 [2024-12-05 14:06:03.315855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f7e83000 sqhd:7210 p:0 m:0 dnr:0 00:35:13.472 [2024-12-05 14:06:03.315862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.472 [2024-12-05 14:06:03.315868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f7e83000 sqhd:7210 p:0 m:0 dnr:0 00:35:13.472 [2024-12-05 14:06:03.315875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:15408 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000432c000 len:0x1000 key:0x183700 00:35:13.472 [2024-12-05 14:06:03.315881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f7e83000 sqhd:7210 p:0 m:0 dnr:0 00:35:13.472 [2024-12-05 14:06:03.315889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:15416 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004392000 len:0x1000 key:0x183700 00:35:13.472 [2024-12-05 14:06:03.315894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f7e83000 sqhd:7210 p:0 m:0 dnr:0 00:35:13.472 [2024-12-05 14:06:03.315902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:15424 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004394000 len:0x1000 key:0x183700 00:35:13.472 [2024-12-05 14:06:03.315908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f7e83000 sqhd:7210 p:0 m:0 dnr:0 00:35:13.472 [2024-12-05 14:06:03.315915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:15432 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004396000 len:0x1000 key:0x183700 00:35:13.472 [2024-12-05 14:06:03.315923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f7e83000 sqhd:7210 p:0 m:0 dnr:0 00:35:13.472 [2024-12-05 14:06:03.315930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:15440 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004398000 len:0x1000 key:0x183700 00:35:13.472 [2024-12-05 14:06:03.315936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f7e83000 sqhd:7210 p:0 m:0 dnr:0 00:35:13.472 
[2024-12-05 14:06:03.315944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:15448 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000439a000 len:0x1000 key:0x183700 00:35:13.472 [2024-12-05 14:06:03.315949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f7e83000 sqhd:7210 p:0 m:0 dnr:0 00:35:13.472 [2024-12-05 14:06:03.315957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15456 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000439c000 len:0x1000 key:0x183700 00:35:13.473 [2024-12-05 14:06:03.315963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f7e83000 sqhd:7210 p:0 m:0 dnr:0 00:35:13.473 [2024-12-05 14:06:03.315970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:15464 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000439e000 len:0x1000 key:0x183700 00:35:13.473 [2024-12-05 14:06:03.315976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f7e83000 sqhd:7210 p:0 m:0 dnr:0 00:35:13.473 [2024-12-05 14:06:03.315983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:15856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.473 [2024-12-05 14:06:03.315989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f7e83000 sqhd:7210 p:0 m:0 dnr:0 00:35:13.473 [2024-12-05 14:06:03.315997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:15864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.473 [2024-12-05 14:06:03.316002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f7e83000 sqhd:7210 p:0 m:0 dnr:0 00:35:13.473 [2024-12-05 14:06:03.316010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:15872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.473 [2024-12-05 14:06:03.316016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f7e83000 sqhd:7210 p:0 m:0 dnr:0 00:35:13.473 [2024-12-05 14:06:03.316023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:15880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.473 [2024-12-05 14:06:03.316029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f7e83000 sqhd:7210 p:0 m:0 dnr:0 00:35:13.473 [2024-12-05 14:06:03.316036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:15888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.473 [2024-12-05 14:06:03.316042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f7e83000 sqhd:7210 p:0 m:0 dnr:0 00:35:13.473 [2024-12-05 14:06:03.316049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:15896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.473 [2024-12-05 14:06:03.316055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f7e83000 sqhd:7210 p:0 m:0 dnr:0 00:35:13.473 [2024-12-05 14:06:03.316062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:15904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.473 [2024-12-05 14:06:03.316069] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f7e83000 sqhd:7210 p:0 m:0 dnr:0
00:35:13.473 [2024-12-05 14:06:03.316080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:15912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:13.473 [2024-12-05 14:06:03.316086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f7e83000 sqhd:7210 p:0 m:0 dnr:0
00:35:13.473 [2024-12-05 14:06:03.316093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:15472 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000430a000 len:0x1000 key:0x183700
00:35:13.473 [2024-12-05 14:06:03.316099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f7e83000 sqhd:7210 p:0 m:0 dnr:0
00:35:13.473 [2024-12-05 14:06:03.316107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:15480 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004308000 len:0x1000 key:0x183700
00:35:13.473 [2024-12-05 14:06:03.316112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f7e83000 sqhd:7210 p:0 m:0 dnr:0
00:35:13.473 [2024-12-05 14:06:03.316120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:15488 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004306000 len:0x1000 key:0x183700
00:35:13.473 [2024-12-05 14:06:03.316126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f7e83000 sqhd:7210 p:0 m:0 dnr:0
00:35:13.473 [2024-12-05 14:06:03.326971] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:35:13.473 [2024-12-05 14:06:03.326984] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:35:13.473 [2024-12-05 14:06:03.326991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15496 len:8 PRP1 0x0 PRP2 0x0
00:35:13.473 [2024-12-05 14:06:03.326997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:13.473 [2024-12-05 14:06:03.327038] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 192.168.100.8:4421 to 192.168.100.8:4422
00:35:13.473 [2024-12-05 14:06:03.327048] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state.
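The bdev_nvme_failover_trid record just above is the pivot of this pass: once the queued I/O on 192.168.100.8:4421 has been manually completed as ABORTED - SQ DELETION, the initiator marks the controller failed and moves nqn.2016-06.io.spdk:cnode1 to its alternate path on port 4422. A minimal sketch of setting up that kind of two-path failover with stock SPDK RPCs follows, assuming a running target that listens on both ports; the bdev name Nvme0 and the listener choreography are illustrative, since the log does not show the driving script:

  # Register the same subsystem twice under one bdev in failover mode,
  # so bdev_nvme keeps port 4422 as a standby path for 4421.
  scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t rdma -f ipv4 \
      -a 192.168.100.8 -s 4421 -n nqn.2016-06.io.spdk:cnode1 -x failover
  scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t rdma -f ipv4 \
      -a 192.168.100.8 -s 4422 -n nqn.2016-06.io.spdk:cnode1 -x failover
  # Tearing down the active listener on the target is what produces the
  # SQ-deletion aborts and the failover_trid switch recorded above.
  scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 \
      -t rdma -f ipv4 -a 192.168.100.8 -s 4421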
00:35:13.473 [2024-12-05 14:06:03.327078] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:35:13.473 [2024-12-05 14:06:03.327086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32766 cdw0:80dca0 sqhd:7870 p:0 m:0 dnr:0
00:35:13.473 [2024-12-05 14:06:03.327093] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:35:13.473 [2024-12-05 14:06:03.327099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32766 cdw0:80dca0 sqhd:7870 p:0 m:0 dnr:0
00:35:13.473 [2024-12-05 14:06:03.327106] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:35:13.473 [2024-12-05 14:06:03.327112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32766 cdw0:80dca0 sqhd:7870 p:0 m:0 dnr:0
00:35:13.473 [2024-12-05 14:06:03.327119] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000
00:35:13.473 [2024-12-05 14:06:03.327125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32766 cdw0:80dca0 sqhd:7870 p:0 m:0 dnr:0
00:35:13.473 [2024-12-05 14:06:03.343714] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] CQ transport error -6 (No such device or address) on qpair id 0
00:35:13.473 [2024-12-05 14:06:03.343733] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] already in failed state
00:35:13.473 [2024-12-05 14:06:03.343742] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Unable to perform failover, already in progress.
00:35:13.473 [2024-12-05 14:06:03.346329] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller
00:35:13.473 [2024-12-05 14:06:03.386505] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful.
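Every completion in the storm carries the same "(00/08)" pair printed by spdk_nvme_print_completion: status code type 0x0 (generic) and status code 0x08, which the NVMe base specification defines as Command Aborted due to SQ Deletion, exactly what is expected while the qpair is torn down for failover. The CQ transport error -6 is -ENXIO (No such device or address) surfacing from the RDMA channel once the old path is gone; bdev_nvme_failover_ctrlr_unsafe then declines to start a second failover because one is already in flight, and the reset onto the new path completes successfully. A small helper to split that SCT/SC pair (hypothetical, not part of SPDK):

  decode_nvme_status() {
    # Split the "SCT/SC" pair as printed by spdk_nvme_print_completion.
    local sct=$((16#${1%/*})) sc=$((16#${1#*/}))
    printf 'SCT 0x%x, SC 0x%02x' "$sct" "$sc"
    # Within the generic type (SCT 0x0), SC 0x08 is "Command Aborted
    # due to SQ Deletion" in the NVMe base spec.
    if [ "$sct" -eq 0 ] && [ "$sc" -eq 8 ]; then
      printf ' (aborted - SQ deletion)'
    fi
    printf '\n'
  }
  decode_nvme_status 00/08   # -> SCT 0x0, SC 0x08 (aborted - SQ deletion)

The bdevperf counters on the next line are consistent with the workload recovering after the reset: 12150.67 IOPS at 4 KiB per I/O is 12150.67 x 4096 B ≈ 47.46 MiB/s, climbing to 14312.44 IOPS (55.91 MiB/s) as the queues refill on the new path.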
00:35:13.473 12150.67 IOPS, 47.46 MiB/s [2024-12-05T13:06:13.326Z] 13165.14 IOPS, 51.43 MiB/s [2024-12-05T13:06:13.326Z] 13929.12 IOPS, 54.41 MiB/s [2024-12-05T13:06:13.326Z] 14312.44 IOPS, 55.91 MiB/s [2024-12-05T13:06:13.326Z] [2024-12-05 14:06:07.687162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:10248 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000433c000 len:0x1000 key:0x182900 00:35:13.473 [2024-12-05 14:06:07.687200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f7e83000 sqhd:7210 p:0 m:0 dnr:0 00:35:13.473 [2024-12-05 14:06:07.687215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10256 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004372000 len:0x1000 key:0x182900 00:35:13.473 [2024-12-05 14:06:07.687222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f7e83000 sqhd:7210 p:0 m:0 dnr:0 00:35:13.473 [2024-12-05 14:06:07.687231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:10264 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004346000 len:0x1000 key:0x182900 00:35:13.473 [2024-12-05 14:06:07.687237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f7e83000 sqhd:7210 p:0 m:0 dnr:0 00:35:13.473 [2024-12-05 14:06:07.687244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:10272 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004376000 len:0x1000 key:0x182900 00:35:13.473 [2024-12-05 14:06:07.687250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f7e83000 sqhd:7210 p:0 m:0 dnr:0 00:35:13.473 [2024-12-05 14:06:07.687258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10280 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004334000 len:0x1000 key:0x182900 00:35:13.473 [2024-12-05 14:06:07.687264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f7e83000 sqhd:7210 p:0 m:0 dnr:0 00:35:13.473 [2024-12-05 14:06:07.687271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:10288 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000437a000 len:0x1000 key:0x182900 00:35:13.473 [2024-12-05 14:06:07.687277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f7e83000 sqhd:7210 p:0 m:0 dnr:0 00:35:13.473 [2024-12-05 14:06:07.687285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:10296 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000437c000 len:0x1000 key:0x182900 00:35:13.473 [2024-12-05 14:06:07.687291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f7e83000 sqhd:7210 p:0 m:0 dnr:0 00:35:13.473 [2024-12-05 14:06:07.687298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:10304 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000434a000 len:0x1000 key:0x182900 00:35:13.473 [2024-12-05 14:06:07.687304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f7e83000 sqhd:7210 p:0 m:0 dnr:0 00:35:13.473 [2024-12-05 14:06:07.687312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:10312 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004382000 len:0x1000 key:0x182900 00:35:13.473 [2024-12-05 
14:06:07.687318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f7e83000 sqhd:7210 p:0 m:0 dnr:0 00:35:13.473 [2024-12-05 14:06:07.687325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:10320 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004384000 len:0x1000 key:0x182900 00:35:13.473 [2024-12-05 14:06:07.687330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f7e83000 sqhd:7210 p:0 m:0 dnr:0 00:35:13.473 [2024-12-05 14:06:07.687338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10328 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000439a000 len:0x1000 key:0x182900 00:35:13.473 [2024-12-05 14:06:07.687347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f7e83000 sqhd:7210 p:0 m:0 dnr:0 00:35:13.473 [2024-12-05 14:06:07.687354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:10336 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004398000 len:0x1000 key:0x182900 00:35:13.473 [2024-12-05 14:06:07.687361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f7e83000 sqhd:7210 p:0 m:0 dnr:0 00:35:13.473 [2024-12-05 14:06:07.687368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:10344 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004312000 len:0x1000 key:0x182900 00:35:13.473 [2024-12-05 14:06:07.687377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f7e83000 sqhd:7210 p:0 m:0 dnr:0 00:35:13.473 [2024-12-05 14:06:07.687385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:10352 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004310000 len:0x1000 key:0x182900 00:35:13.473 [2024-12-05 14:06:07.687391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f7e83000 sqhd:7210 p:0 m:0 dnr:0 00:35:13.473 [2024-12-05 14:06:07.687398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:10360 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000430e000 len:0x1000 key:0x182900 00:35:13.473 [2024-12-05 14:06:07.687404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f7e83000 sqhd:7210 p:0 m:0 dnr:0 00:35:13.473 [2024-12-05 14:06:07.687411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:10368 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000438e000 len:0x1000 key:0x182900 00:35:13.474 [2024-12-05 14:06:07.687417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f7e83000 sqhd:7210 p:0 m:0 dnr:0 00:35:13.474 [2024-12-05 14:06:07.687426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:10376 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000435a000 len:0x1000 key:0x182900 00:35:13.474 [2024-12-05 14:06:07.687432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f7e83000 sqhd:7210 p:0 m:0 dnr:0 00:35:13.474 [2024-12-05 14:06:07.687439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:10384 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004358000 len:0x1000 key:0x182900 00:35:13.474 [2024-12-05 14:06:07.687445] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f7e83000 sqhd:7210 p:0 m:0 dnr:0 00:35:13.474 [2024-12-05 14:06:07.687453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:10392 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004356000 len:0x1000 key:0x182900 00:35:13.474 [2024-12-05 14:06:07.687459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f7e83000 sqhd:7210 p:0 m:0 dnr:0 00:35:13.474 [2024-12-05 14:06:07.687466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:10400 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004378000 len:0x1000 key:0x182900 00:35:13.474 [2024-12-05 14:06:07.687472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f7e83000 sqhd:7210 p:0 m:0 dnr:0 00:35:13.474 [2024-12-05 14:06:07.687480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10408 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004352000 len:0x1000 key:0x182900 00:35:13.474 [2024-12-05 14:06:07.687485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f7e83000 sqhd:7210 p:0 m:0 dnr:0 00:35:13.474 [2024-12-05 14:06:07.687492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:10416 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004350000 len:0x1000 key:0x182900 00:35:13.474 [2024-12-05 14:06:07.687498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f7e83000 sqhd:7210 p:0 m:0 dnr:0 00:35:13.474 [2024-12-05 14:06:07.687508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:10856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.474 [2024-12-05 14:06:07.687514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f7e83000 sqhd:7210 p:0 m:0 dnr:0 00:35:13.474 [2024-12-05 14:06:07.687522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:10864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.474 [2024-12-05 14:06:07.687528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f7e83000 sqhd:7210 p:0 m:0 dnr:0 00:35:13.474 [2024-12-05 14:06:07.687535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:10872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.474 [2024-12-05 14:06:07.687540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f7e83000 sqhd:7210 p:0 m:0 dnr:0 00:35:13.474 [2024-12-05 14:06:07.687547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:10880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.474 [2024-12-05 14:06:07.687553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f7e83000 sqhd:7210 p:0 m:0 dnr:0 00:35:13.474 [2024-12-05 14:06:07.687560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:10888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.474 [2024-12-05 14:06:07.687566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f7e83000 sqhd:7210 p:0 m:0 dnr:0 00:35:13.474 [2024-12-05 14:06:07.687573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 
lba:10896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.474 [2024-12-05 14:06:07.687580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f7e83000 sqhd:7210 p:0 m:0 dnr:0 00:35:13.474 [2024-12-05 14:06:07.687587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:10904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.474 [2024-12-05 14:06:07.687593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f7e83000 sqhd:7210 p:0 m:0 dnr:0 00:35:13.474 [2024-12-05 14:06:07.687601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:10912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.474 [2024-12-05 14:06:07.687607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f7e83000 sqhd:7210 p:0 m:0 dnr:0 00:35:13.474 [2024-12-05 14:06:07.687614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10424 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004392000 len:0x1000 key:0x182900 00:35:13.474 [2024-12-05 14:06:07.687620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f7e83000 sqhd:7210 p:0 m:0 dnr:0 00:35:13.474 [2024-12-05 14:06:07.687628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:10432 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000432c000 len:0x1000 key:0x182900 00:35:13.474 [2024-12-05 14:06:07.687634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f7e83000 sqhd:7210 p:0 m:0 dnr:0 00:35:13.474 [2024-12-05 14:06:07.687641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:10440 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000436e000 len:0x1000 key:0x182900 00:35:13.474 [2024-12-05 14:06:07.687647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f7e83000 sqhd:7210 p:0 m:0 dnr:0 00:35:13.474 [2024-12-05 14:06:07.687654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:10448 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000436c000 len:0x1000 key:0x182900 00:35:13.474 [2024-12-05 14:06:07.687660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f7e83000 sqhd:7210 p:0 m:0 dnr:0 00:35:13.474 [2024-12-05 14:06:07.687669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:10456 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000436a000 len:0x1000 key:0x182900 00:35:13.474 [2024-12-05 14:06:07.687675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f7e83000 sqhd:7210 p:0 m:0 dnr:0 00:35:13.474 [2024-12-05 14:06:07.687682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:10464 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004354000 len:0x1000 key:0x182900 00:35:13.474 [2024-12-05 14:06:07.687688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f7e83000 sqhd:7210 p:0 m:0 dnr:0 00:35:13.474 [2024-12-05 14:06:07.687696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10472 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004366000 len:0x1000 key:0x182900 00:35:13.474 [2024-12-05 14:06:07.687701] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f7e83000 sqhd:7210 p:0 m:0 dnr:0 00:35:13.474 [2024-12-05 14:06:07.687709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:10920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.474 [2024-12-05 14:06:07.687715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f7e83000 sqhd:7210 p:0 m:0 dnr:0 00:35:13.474 [2024-12-05 14:06:07.687722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:10928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.474 [2024-12-05 14:06:07.687728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f7e83000 sqhd:7210 p:0 m:0 dnr:0 00:35:13.474 [2024-12-05 14:06:07.687735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:10936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.474 [2024-12-05 14:06:07.687740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f7e83000 sqhd:7210 p:0 m:0 dnr:0 00:35:13.474 [2024-12-05 14:06:07.687747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:10944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.474 [2024-12-05 14:06:07.687753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f7e83000 sqhd:7210 p:0 m:0 dnr:0 00:35:13.474 [2024-12-05 14:06:07.687760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:10952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.474 [2024-12-05 14:06:07.687766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f7e83000 sqhd:7210 p:0 m:0 dnr:0 00:35:13.474 [2024-12-05 14:06:07.687772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:10960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.474 [2024-12-05 14:06:07.687778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f7e83000 sqhd:7210 p:0 m:0 dnr:0 00:35:13.474 [2024-12-05 14:06:07.687785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:10968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.474 [2024-12-05 14:06:07.687791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f7e83000 sqhd:7210 p:0 m:0 dnr:0 00:35:13.474 [2024-12-05 14:06:07.687798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:10480 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000439c000 len:0x1000 key:0x182900 00:35:13.474 [2024-12-05 14:06:07.687804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f7e83000 sqhd:7210 p:0 m:0 dnr:0 00:35:13.474 [2024-12-05 14:06:07.687811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:10488 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000439e000 len:0x1000 key:0x182900 00:35:13.474 [2024-12-05 14:06:07.687818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f7e83000 sqhd:7210 p:0 m:0 dnr:0 00:35:13.474 [2024-12-05 14:06:07.687825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:10496 len:8 SGL KEYED DATA 
BLOCK ADDRESS 0x20000430a000 len:0x1000 key:0x182900 00:35:13.474 [2024-12-05 14:06:07.687831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f7e83000 sqhd:7210 p:0 m:0 dnr:0 00:35:13.474 [2024-12-05 14:06:07.687839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:10504 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004308000 len:0x1000 key:0x182900 00:35:13.474 [2024-12-05 14:06:07.687844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f7e83000 sqhd:7210 p:0 m:0 dnr:0 00:35:13.474 [2024-12-05 14:06:07.687851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:10512 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004306000 len:0x1000 key:0x182900 00:35:13.475 [2024-12-05 14:06:07.687858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f7e83000 sqhd:7210 p:0 m:0 dnr:0 00:35:13.475 [2024-12-05 14:06:07.687865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:10520 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004368000 len:0x1000 key:0x182900 00:35:13.475 [2024-12-05 14:06:07.687871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f7e83000 sqhd:7210 p:0 m:0 dnr:0 00:35:13.475 [2024-12-05 14:06:07.687878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:10528 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004342000 len:0x1000 key:0x182900 00:35:13.475 [2024-12-05 14:06:07.687884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f7e83000 sqhd:7210 p:0 m:0 dnr:0 00:35:13.475 [2024-12-05 14:06:07.687892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:10536 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004364000 len:0x1000 key:0x182900 00:35:13.475 [2024-12-05 14:06:07.687897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f7e83000 sqhd:7210 p:0 m:0 dnr:0 00:35:13.475 [2024-12-05 14:06:07.687904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:10544 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000434e000 len:0x1000 key:0x182900 00:35:13.475 [2024-12-05 14:06:07.687910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f7e83000 sqhd:7210 p:0 m:0 dnr:0 00:35:13.475 [2024-12-05 14:06:07.687917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:10552 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000434c000 len:0x1000 key:0x182900 00:35:13.475 [2024-12-05 14:06:07.687923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f7e83000 sqhd:7210 p:0 m:0 dnr:0 00:35:13.475 [2024-12-05 14:06:07.687931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:10560 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004302000 len:0x1000 key:0x182900 00:35:13.475 [2024-12-05 14:06:07.687936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f7e83000 sqhd:7210 p:0 m:0 dnr:0 00:35:13.475 [2024-12-05 14:06:07.687944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:10568 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004300000 len:0x1000 key:0x182900 00:35:13.475 
[2024-12-05 14:06:07.687950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f7e83000 sqhd:7210 p:0 m:0 dnr:0 00:35:13.475 [2024-12-05 14:06:07.687957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:10576 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000435c000 len:0x1000 key:0x182900 00:35:13.475 [2024-12-05 14:06:07.687963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f7e83000 sqhd:7210 p:0 m:0 dnr:0 00:35:13.475 [2024-12-05 14:06:07.687971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:10584 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004314000 len:0x1000 key:0x182900 00:35:13.475 [2024-12-05 14:06:07.687977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f7e83000 sqhd:7210 p:0 m:0 dnr:0 00:35:13.475 [2024-12-05 14:06:07.687985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:10592 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004388000 len:0x1000 key:0x182900 00:35:13.475 [2024-12-05 14:06:07.687991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f7e83000 sqhd:7210 p:0 m:0 dnr:0 00:35:13.475 [2024-12-05 14:06:07.687998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:10600 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004386000 len:0x1000 key:0x182900 00:35:13.475 [2024-12-05 14:06:07.688004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f7e83000 sqhd:7210 p:0 m:0 dnr:0 00:35:13.475 [2024-12-05 14:06:07.688011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:10976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.475 [2024-12-05 14:06:07.688017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f7e83000 sqhd:7210 p:0 m:0 dnr:0 00:35:13.475 [2024-12-05 14:06:07.688023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:10984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.475 [2024-12-05 14:06:07.688029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f7e83000 sqhd:7210 p:0 m:0 dnr:0 00:35:13.475 [2024-12-05 14:06:07.688036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:10992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.475 [2024-12-05 14:06:07.688042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f7e83000 sqhd:7210 p:0 m:0 dnr:0 00:35:13.475 [2024-12-05 14:06:07.688049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:11000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.475 [2024-12-05 14:06:07.688055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f7e83000 sqhd:7210 p:0 m:0 dnr:0 00:35:13.475 [2024-12-05 14:06:07.688062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:11008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.475 [2024-12-05 14:06:07.688068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f7e83000 sqhd:7210 p:0 m:0 dnr:0 00:35:13.475 [2024-12-05 14:06:07.688075] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:11016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.475 [2024-12-05 14:06:07.688081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f7e83000 sqhd:7210 p:0 m:0 dnr:0 00:35:13.475 [2024-12-05 14:06:07.688088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:11024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.475 [2024-12-05 14:06:07.688093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f7e83000 sqhd:7210 p:0 m:0 dnr:0 00:35:13.475 [2024-12-05 14:06:07.688100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.475 [2024-12-05 14:06:07.688106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f7e83000 sqhd:7210 p:0 m:0 dnr:0 00:35:13.475 [2024-12-05 14:06:07.688113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:11040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.475 [2024-12-05 14:06:07.688119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f7e83000 sqhd:7210 p:0 m:0 dnr:0 00:35:13.475 [2024-12-05 14:06:07.688127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:11048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.475 [2024-12-05 14:06:07.688133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f7e83000 sqhd:7210 p:0 m:0 dnr:0 00:35:13.475 [2024-12-05 14:06:07.688140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:11056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.475 [2024-12-05 14:06:07.688146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f7e83000 sqhd:7210 p:0 m:0 dnr:0 00:35:13.475 [2024-12-05 14:06:07.688153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:11064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.475 [2024-12-05 14:06:07.688159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f7e83000 sqhd:7210 p:0 m:0 dnr:0 00:35:13.475 [2024-12-05 14:06:07.688166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:11072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.475 [2024-12-05 14:06:07.688172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f7e83000 sqhd:7210 p:0 m:0 dnr:0 00:35:13.475 [2024-12-05 14:06:07.688179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:11080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.475 [2024-12-05 14:06:07.688185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f7e83000 sqhd:7210 p:0 m:0 dnr:0 00:35:13.475 [2024-12-05 14:06:07.688193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:11088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.475 [2024-12-05 14:06:07.688198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f7e83000 sqhd:7210 p:0 m:0 dnr:0 00:35:13.475 
[2024-12-05 14:06:07.688205 - 14:06:07.688898] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: repetitive per-command dump condensed -- some fifty queued commands on sqid:1 nsid:1 (READs lba 10608-10848, len:8, SGL KEYED DATA BLOCK, key:0x182900; WRITEs lba 11096-11256, len:8, SGL DATA BLOCK OFFSET 0x0) were each printed and completed as ABORTED - SQ DELETION (00/08) qid:1 cdw0:f7e83000 sqhd:7210 p:0 m:0 dnr:0
00:35:13.477 [2024-12-05 14:06:07.690586] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:35:13.477 [2024-12-05 14:06:07.690596] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:35:13.477 [2024-12-05 14:06:07.690602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11264 len:8 PRP1 0x0 PRP2 0x0
00:35:13.477 [2024-12-05 14:06:07.690611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:13.477 [2024-12-05 14:06:07.690649] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 192.168.100.8:4422 to 192.168.100.8:4420
00:35:13.477 [2024-12-05 14:06:07.690658] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state.
00:35:13.477 [2024-12-05 14:06:07.693266] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller
00:35:13.477 [2024-12-05 14:06:07.706905] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] CQ transport error -6 (No such device or address) on qpair id 0
00:35:13.477 [2024-12-05 14:06:07.748498] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful.
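Each failover cycle in this run ends with one of these "Resetting controller successful." notices, and the script asserts below that exactly three occurred. A minimal stand-alone version of that check (reading the try.txt capture file this test writes; the grep target is taken verbatim from the trace below) might look like:

    count=$(grep -c 'Resetting controller successful' /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt)
    (( count == 3 )) || { echo "expected 3 successful failover resets, got $count" >&2; exit 1; }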
00:35:13.477 12939.90 IOPS, 50.55 MiB/s
[2024-12-05T13:06:13.330Z] 13519.09 IOPS, 52.81 MiB/s
[2024-12-05T13:06:13.330Z] 14000.83 IOPS, 54.69 MiB/s
[2024-12-05T13:06:13.330Z] 14405.38 IOPS, 56.27 MiB/s
[2024-12-05T13:06:13.330Z] 14754.79 IOPS, 57.64 MiB/s
[2024-12-05T13:06:13.330Z] 15053.47 IOPS, 58.80 MiB/s
00:35:13.477 Latency(us)
[2024-12-05T13:06:13.330Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:35:13.477 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:35:13.477 Verification LBA range: start 0x0 length 0x4000
00:35:13.477 NVMe0n1 : 15.01 15054.61 58.81 355.34 0.00 8285.63 312.51 1043915.66
[2024-12-05T13:06:13.330Z] ===================================================================================================================
[2024-12-05T13:06:13.330Z] Total : 15054.61 58.81 355.34 0.00 8285.63 312.51 1043915.66
00:35:13.477 Received shutdown signal, test time was about 15.000000 seconds
00:35:13.477
00:35:13.477 Latency(us)
[2024-12-05T13:06:13.330Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
[2024-12-05T13:06:13.330Z] ===================================================================================================================
[2024-12-05T13:06:13.330Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:35:13.477 14:06:13 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:35:13.477 14:06:13 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3
00:35:13.477 14:06:13 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:35:13.477 14:06:13 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=1917011
00:35:13.477 14:06:13 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:35:13.477 14:06:13 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 1917011 /var/tmp/bdevperf.sock
00:35:13.477 14:06:13 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 1917011 ']'
00:35:13.477 14:06:13 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:35:13.477 14:06:13 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100
00:35:13.477 14:06:13 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:35:13.477 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
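waitforlisten blocks here until the bdevperf app comes up and exposes its RPC socket (with max_retries=100 per the trace above). A rough sketch of the idea -- simplified, not the actual helper -- could be:

    while ! [ -S /var/tmp/bdevperf.sock ]; do
        kill -0 1917011 2>/dev/null || exit 1    # give up early if bdevperf already exited
        sleep 0.1
    done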
00:35:13.477 14:06:13 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable
00:35:13.477 14:06:13 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:35:13.477 14:06:13 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:35:13.477 14:06:13 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0
00:35:13.477 14:06:13 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421
00:35:13.736 [2024-12-05 14:06:13.436013] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 ***
00:35:13.736 14:06:13 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422
00:35:13.996 [2024-12-05 14:06:13.624648] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4422 ***
00:35:13.996 14:06:13 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:35:14.255 NVMe0n1
00:35:14.255 14:06:13 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:35:14.513
00:35:14.513 14:06:14 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:35:14.771
00:35:14.771 14:06:14 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:35:14.771 14:06:14 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0
00:35:14.771 14:06:14 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:35:15.030 14:06:14 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3
00:35:18.493 14:06:17 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:35:18.493 14:06:17 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0
00:35:18.493 14:06:17 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:35:18.493 14:06:17 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=1917810
00:35:18.493 14:06:17 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 1917810
00:35:19.430 {
00:35:19.430   "results": [
00:35:19.430     {
00:35:19.430       "job": "NVMe0n1",
00:35:19.430       "core_mask": "0x1",
00:35:19.430       "workload": "verify",
00:35:19.430       "status": "finished",
00:35:19.430       "verify_range": {
00:35:19.430         "start": 0,
00:35:19.430         "length": 16384
00:35:19.430       },
00:35:19.430       "queue_depth": 128,
00:35:19.430       "io_size": 4096,
00:35:19.430       "runtime": 1.005975,
00:35:19.430       "iops": 18958.721638211686,
00:35:19.430       "mibps": 74.0575063992644,
00:35:19.430       "io_failed": 0,
00:35:19.430       "io_timeout": 0,
00:35:19.430       "avg_latency_us": 6717.90357444693,
00:35:19.430       "min_latency_us": 2415.122962962963,
00:35:19.430       "max_latency_us": 16019.91111111111
00:35:19.430     }
00:35:19.430   ],
00:35:19.430   "core_count": 1
00:35:19.430 }
00:35:19.430 14:06:19 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt
00:35:19.430 [2024-12-05 14:06:13.096568] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization...
00:35:19.430 [2024-12-05 14:06:13.096614] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1917011 ]
00:35:19.430 [2024-12-05 14:06:13.170571] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:35:19.430 [2024-12-05 14:06:13.189199] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:35:19.430 [2024-12-05 14:06:14.722815] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 192.168.100.8:4420 to 192.168.100.8:4421
00:35:19.430 [2024-12-05 14:06:14.723387] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state.
00:35:19.430 [2024-12-05 14:06:14.723423] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller
00:35:19.430 [2024-12-05 14:06:14.739263] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] CQ transport error -6 (No such device or address) on qpair id 0
00:35:19.430 [2024-12-05 14:06:14.755373] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful.
00:35:19.430 Running I/O for 1 seconds...
00:35:19.430 18944.00 IOPS, 74.00 MiB/s
00:35:19.430 Latency(us)
[2024-12-05T13:06:19.283Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:35:19.430 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:35:19.430 Verification LBA range: start 0x0 length 0x4000
00:35:19.430 NVMe0n1 : 1.01 18958.72 74.06 0.00 0.00 6717.90 2415.12 16019.91
[2024-12-05T13:06:19.283Z] ===================================================================================================================
[2024-12-05T13:06:19.283Z] Total : 18958.72 74.06 0.00 0.00 6717.90 2415.12 16019.91
00:35:19.430 14:06:19 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:35:19.430 14:06:19 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
00:35:19.430 14:06:19 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:35:19.690 14:06:19 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:35:19.690 14:06:19 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0
00:35:19.949 14:06:19 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:35:20.208 14:06:19 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3
00:35:23.494 14:06:22 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:35:23.494 14:06:22 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0
00:35:23.494 14:06:23 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 1917011
00:35:23.494 14:06:23 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 1917011 ']'
00:35:23.494 14:06:23 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 1917011
00:35:23.494 14:06:23 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname
00:35:23.494 14:06:23 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:35:23.494 14:06:23 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1917011
00:35:23.494 14:06:23 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:35:23.494 14:06:23 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:35:23.494 14:06:23 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1917011'
00:35:23.494 killing process with pid 1917011
00:35:23.494 14:06:23 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 1917011
00:35:23.494 14:06:23 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 1917011
00:35:23.494 14:06:23 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync
00:35:23.494 14:06:23 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:35:23.754 14:06:23 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT
00:35:23.754 14:06:23 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt
00:35:23.754 14:06:23 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini
00:35:23.754 14:06:23 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup
00:35:23.754 14:06:23 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync
00:35:23.754 14:06:23 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:35:23.754 14:06:23 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:35:23.754 14:06:23 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e
00:35:23.754 14:06:23 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20}
00:35:23.754 14:06:23 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
00:35:23.754 rmmod nvme_rdma
00:35:23.754 rmmod nvme_fabrics
00:35:23.755 14:06:23 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:35:23.755 14:06:23 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e
00:35:23.755 14:06:23 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0
00:35:23.755 14:06:23 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 1913207 ']'
00:35:23.755 14:06:23 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 1913207
00:35:23.755 14:06:23 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 1913207 ']'
00:35:23.755 14:06:23 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 1913207
00:35:23.755 14:06:23 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname
00:35:23.755 14:06:23 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:35:23.755 14:06:23 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1913207
00:35:23.755 14:06:23 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:35:23.755 14:06:23 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:35:23.755 14:06:23 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1913207'
00:35:23.755 killing process with pid 1913207
00:35:23.755 14:06:23 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 1913207
00:35:23.755 14:06:23 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 1913207
00:35:24.015 14:06:23 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:35:24.015 14:06:23 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]]
00:35:24.015
00:35:24.015 real 0m34.719s
00:35:24.015 user 1m56.824s
00:35:24.015 sys 0m6.324s
00:35:24.015 14:06:23 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable
00:35:24.015 14:06:23 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
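Stripped of the killprocess bookkeeping, the teardown traced above reduces to a short sequence (pids and paths taken from this run):

    kill 1917011; wait 1917011        # stop the bdevperf app
    sync
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt
    modprobe -v -r nvme-rdma && modprobe -v -r nvme-fabrics    # unload host-side transport modules
    kill 1913207; wait 1913207        # stop the nvmf target app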
00:35:24.015 ************************************
00:35:24.015 END TEST nvmf_failover
00:35:24.015 ************************************
00:35:24.015 14:06:23 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=rdma
00:35:24.015 14:06:23 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:35:24.015 14:06:23 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:35:24.015 14:06:23 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:35:24.015 ************************************
00:35:24.015 START TEST nvmf_host_discovery
00:35:24.015 ************************************
00:35:24.015 14:06:23 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=rdma
00:35:24.275 * Looking for test storage...
00:35:24.275 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host
00:35:24.275 14:06:23 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1710-1725 / scripts/common.sh trace condensed: lcov --version reports 1.15, cmp_versions finds 1.15 < 2, so lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' and LCOV_OPTS/LCOV are exported with the branch/function/genhtml/geninfo coverage flags
00:35:24.276 14:06:23 nvmf_rdma.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh
00:35:24.276 14:06:23 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7-22 trace condensed: uname -s is Linux; NVMF_PORT=4420, NVMF_SECOND_PORT=4421, NVMF_THIRD_PORT=4422, NVMF_IP_PREFIX=192.168.100, NVMF_IP_LEAST_ADDR=8, NVMF_TCP_IP_ADDRESS=127.0.0.1, NVMF_SERIAL=SPDKISFASTANDAWESOME, NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562, NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562, NVME_CONNECT='nvme connect', NET_TYPE=phy, NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:35:24.276 14:06:23 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 / scripts/common.sh / paths/export.sh@2-6 trace condensed: scripts/common.sh and /etc/opt/spdk-pkgdep/paths/export.sh sourced; repeated PATH exports prepending the /opt/golangci/1.54.2, /opt/protoc/21.7 and /opt/go/1.21.1 bin directories (identical segments elided)
00:35:24.276 14:06:23 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51-55 trace condensed: NVMF_APP_SHM_ID exported, NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF), have_pci_nics=0
00:35:24.276 14:06:23 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:35:24.276 14:06:23 nvmf_rdma.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' rdma == rdma ']'
00:35:24.276 14:06:23 nvmf_rdma.nvmf_host.nvmf_host_discovery -- host/discovery.sh@12 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.'
00:35:24.276 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.
00:35:24.276 14:06:23 nvmf_rdma.nvmf_host.nvmf_host_discovery -- host/discovery.sh@13 -- # exit 0
00:35:24.276
00:35:24.276 real 0m0.186s
00:35:24.276 user 0m0.113s
00:35:24.276 sys 0m0.087s
00:35:24.276 14:06:23 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable
00:35:24.276 14:06:23 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:35:24.276 ************************************
00:35:24.276 END TEST nvmf_host_discovery
00:35:24.276 ************************************
00:35:24.276 14:06:24 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=rdma
00:35:24.276 14:06:24 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:35:24.276 14:06:24 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:35:24.277 14:06:24 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:35:24.277 ************************************
00:35:24.277 START TEST nvmf_host_multipath_status
00:35:24.277 ************************************
00:35:24.277 14:06:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=rdma
00:35:24.536 * Looking for test storage...
00:35:24.537 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:35:24.537 14:06:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:24.537 14:06:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:24.537 14:06:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:35:24.537 14:06:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:35:24.537 14:06:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:24.537 14:06:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:35:24.537 14:06:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:35:24.537 14:06:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:35:24.537 14:06:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:35:24.537 14:06:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:24.537 14:06:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:35:24.537 14:06:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:35:24.537 14:06:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:24.537 14:06:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:24.537 14:06:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:35:24.537 14:06:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:24.537 14:06:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:35:24.537 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:24.537 --rc genhtml_branch_coverage=1 00:35:24.537 --rc genhtml_function_coverage=1 00:35:24.537 --rc genhtml_legend=1 00:35:24.537 --rc geninfo_all_blocks=1 00:35:24.537 --rc geninfo_unexecuted_blocks=1 00:35:24.537 00:35:24.537 ' 00:35:24.537 14:06:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:35:24.537 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:24.537 --rc genhtml_branch_coverage=1 00:35:24.537 --rc genhtml_function_coverage=1 00:35:24.537 --rc genhtml_legend=1 00:35:24.537 --rc geninfo_all_blocks=1 00:35:24.537 --rc geninfo_unexecuted_blocks=1 00:35:24.537 00:35:24.537 ' 00:35:24.537 14:06:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:35:24.537 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:24.537 --rc genhtml_branch_coverage=1 00:35:24.537 --rc genhtml_function_coverage=1 00:35:24.537 --rc genhtml_legend=1 00:35:24.537 --rc geninfo_all_blocks=1 00:35:24.537 --rc geninfo_unexecuted_blocks=1 00:35:24.537 00:35:24.537 ' 00:35:24.537 14:06:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:35:24.537 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:24.537 --rc genhtml_branch_coverage=1 00:35:24.537 --rc genhtml_function_coverage=1 
00:35:24.537 --rc genhtml_legend=1 00:35:24.537 --rc geninfo_all_blocks=1 00:35:24.537 --rc geninfo_unexecuted_blocks=1 00:35:24.537 00:35:24.537 ' 00:35:24.537 14:06:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:35:24.537 14:06:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:35:24.537 14:06:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:24.537 14:06:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:24.537 14:06:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:24.537 14:06:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:24.537 14:06:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:24.537 14:06:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:24.537 14:06:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:24.537 14:06:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:24.537 14:06:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:24.537 14:06:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:24.537 14:06:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:35:24.537 14:06:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:35:24.537 14:06:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:24.537 14:06:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:24.537 14:06:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:24.537 14:06:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:24.537 14:06:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:35:24.537 14:06:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:35:24.537 14:06:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:24.537 14:06:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:24.537 14:06:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:24.537 14:06:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:24.537 14:06:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:24.537 14:06:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:24.537 14:06:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:35:24.537 14:06:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:24.537 14:06:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:35:24.537 14:06:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:24.537 14:06:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:24.537 14:06:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:24.537 14:06:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:24.537 14:06:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:24.537 14:06:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:35:24.537 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:24.537 14:06:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:24.537 14:06:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:24.537 14:06:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:24.537 14:06:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:35:24.537 14:06:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:35:24.537 14:06:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:35:24.537 14:06:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/bpftrace.sh 00:35:24.537 14:06:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:35:24.537 14:06:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:35:24.537 14:06:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:35:24.537 14:06:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:35:24.537 14:06:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:24.537 14:06:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:24.537 14:06:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:24.537 14:06:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:24.537 14:06:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:24.537 14:06:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:24.537 14:06:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:24.537 14:06:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:24.537 14:06:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:24.537 14:06:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:35:24.537 14:06:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:35:31.107 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:31.107 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:35:31.107 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:31.107 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:31.107 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:31.107 14:06:30 
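
The `[: : integer expression expected` complaint captured above is bash's test builtin rejecting an empty string on the numeric -eq operator at nvmf/common.sh line 33; the run tolerates it because the failed test simply falls through to the else path. A minimal sketch of the failure mode and the usual guard (FLAG is an illustrative name, not the variable common.sh actually tests):

    #!/usr/bin/env bash
    FLAG=""                          # empty, as in the traced run
    # [ "$FLAG" -eq 1 ] would print "[: : integer expression expected" and return 2
    if [ "${FLAG:-0}" -eq 1 ]; then  # default the empty value before comparing numerically
        echo "flag set"
    else
        echo "flag not set"          # branch taken when FLAG is empty
    fi
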
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:31.107 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:31.107 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:35:31.107 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:31.107 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:35:31.107 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:35:31.107 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:35:31.107 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:35:31.107 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:35:31.107 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 00:35:31.107 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:31.107 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:31.107 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:31.107 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:31.107 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:31.107 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:31.107 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:31.107 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:31.107 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:31.107 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:31.107 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:31.107 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:31.107 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:31.107 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:35:31.107 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:35:31.107 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:35:31.107 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:35:31.107 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:35:31.107 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # 
(( 2 == 0 )) 00:35:31.107 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:31.107 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:35:31.107 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:35:31.107 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:35:31.107 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:35:31.107 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:35:31.107 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:35:31.107 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:35:31.107 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:35:31.107 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:31.107 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:35:31.107 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:35:31.107 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:35:31.107 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:35:31.107 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:35:31.107 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:35:31.107 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:35:31.107 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:35:31.107 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:31.107 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:35:31.107 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:31.107 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:31.107 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:35:31.107 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:31.107 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:31.107 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:35:31.107 Found net devices under 0000:18:00.0: mlx_0_0 00:35:31.107 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:31.107 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:31.108 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # 
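
The "Found 0000:18:00.0 (0x15b3 - 0x1015)" lines come from gather_supported_nvmf_pci_devs walking per-ID arrays (e810, x722, mlx) built from a cached PCI bus scan. A minimal standalone sketch of the same vendor/device matching via sysfs, assuming only that /sys/bus/pci is mounted (common.sh's pci_bus_cache internals differ):

    #!/usr/bin/env bash
    # Report every PCI function with the Mellanox vendor ID, like the trace above.
    mellanox=0x15b3
    for dev in /sys/bus/pci/devices/*; do
        vendor=$(<"$dev/vendor")     # e.g. 0x15b3
        device=$(<"$dev/device")     # e.g. 0x1015
        if [ "$vendor" = "$mellanox" ]; then
            echo "Found ${dev##*/} ($vendor - $device)"
        fi
    done
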
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:31.108 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:35:31.108 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:31.108 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:31.108 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:35:31.108 Found net devices under 0000:18:00.1: mlx_0_1 00:35:31.108 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:31.108 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:31.108 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:35:31.108 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:31.108 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:35:31.108 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:35:31.108 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # rdma_device_init 00:35:31.108 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:35:31.108 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@62 -- # uname 00:35:31.108 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:35:31.108 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@66 -- # modprobe ib_cm 00:35:31.108 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@67 -- # modprobe ib_core 00:35:31.108 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@68 -- # modprobe ib_umad 00:35:31.108 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:35:31.108 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@70 -- # modprobe iw_cm 00:35:31.108 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:35:31.108 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:35:31.108 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@530 -- # allocate_nic_ips 00:35:31.108 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:35:31.108 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@77 -- # get_rdma_if_list 00:35:31.108 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:35:31.108 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:35:31.108 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:35:31.108 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:35:31.108 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:35:31.108 
14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:35:31.108 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:35:31.108 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:35:31.108 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@108 -- # echo mlx_0_0 00:35:31.108 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@109 -- # continue 2 00:35:31.108 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:35:31.108 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:35:31.108 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:35:31.108 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:35:31.108 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:35:31.108 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@108 -- # echo mlx_0_1 00:35:31.108 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@109 -- # continue 2 00:35:31.108 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:35:31.108 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:35:31.108 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:35:31.108 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:35:31.108 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # awk '{print $4}' 00:35:31.108 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # cut -d/ -f1 00:35:31.108 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:35:31.108 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:35:31.108 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:35:31.108 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:35:31.108 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:35:31.108 altname enp24s0f0np0 00:35:31.108 altname ens785f0np0 00:35:31.108 inet 192.168.100.8/24 scope global mlx_0_0 00:35:31.108 valid_lft forever preferred_lft forever 00:35:31.108 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:35:31.108 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:35:31.108 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:35:31.108 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:35:31.108 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # awk '{print $4}' 00:35:31.108 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # cut -d/ -f1 
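
The address probe traced above is common.sh's get_ip_address helper: `ip -o -4` prints one record per address, field 4 is ADDR/PREFIX, and `cut -d/ -f1` drops the prefix length. Reassembled as a standalone function from the exact commands in the trace:

    #!/usr/bin/env bash
    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0    # prints 192.168.100.8 on this rig
    get_ip_address mlx_0_1    # prints 192.168.100.9
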
00:35:31.108 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:35:31.108 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:35:31.108 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:35:31.108 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:35:31.108 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:35:31.108 altname enp24s0f1np1 00:35:31.108 altname ens785f1np1 00:35:31.108 inet 192.168.100.9/24 scope global mlx_0_1 00:35:31.108 valid_lft forever preferred_lft forever 00:35:31.108 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:35:31.108 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:31.108 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:35:31.108 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:35:31.108 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:35:31.108 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@90 -- # get_rdma_if_list 00:35:31.108 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:35:31.108 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:35:31.108 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:35:31.108 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:35:31.108 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:35:31.108 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:35:31.108 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:35:31.108 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:35:31.108 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@108 -- # echo mlx_0_0 00:35:31.108 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@109 -- # continue 2 00:35:31.108 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:35:31.108 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:35:31.108 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:35:31.108 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:35:31.108 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:35:31.108 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@108 -- # echo mlx_0_1 00:35:31.108 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@109 -- # continue 2 00:35:31.108 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status 
-- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:35:31.108 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:35:31.108 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:35:31.108 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:35:31.108 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # awk '{print $4}' 00:35:31.108 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # cut -d/ -f1 00:35:31.108 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:35:31.108 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:35:31.108 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:35:31.108 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:35:31.108 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # awk '{print $4}' 00:35:31.108 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # cut -d/ -f1 00:35:31.108 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:35:31.108 192.168.100.9' 00:35:31.108 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:35:31.108 192.168.100.9' 00:35:31.108 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@485 -- # head -n 1 00:35:31.108 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:35:31.109 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:35:31.109 192.168.100.9' 00:35:31.109 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@486 -- # tail -n +2 00:35:31.109 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@486 -- # head -n 1 00:35:31.109 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:35:31.109 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:35:31.109 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:35:31.109 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:35:31.109 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:35:31.109 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:35:31.109 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:35:31.109 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:31.109 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:31.109 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:35:31.109 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- 
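
Because RDMA_IP_LIST is a newline-separated string (which is why the log's timestamp prefix splits its value across records above), the first and second target IPs are peeled off with head/tail exactly as traced:

    #!/usr/bin/env bash
    RDMA_IP_LIST=$(printf '192.168.100.8\n192.168.100.9\n')
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                 # 192.168.100.8
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)   # 192.168.100.9
    echo "$NVMF_FIRST_TARGET_IP $NVMF_SECOND_TARGET_IP"
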
nvmf/common.sh@509 -- # nvmfpid=1922179 00:35:31.109 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 1922179 00:35:31.109 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:35:31.109 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 1922179 ']' 00:35:31.109 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:31.109 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:31.109 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:31.109 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:31.109 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:31.109 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:35:31.109 [2024-12-05 14:06:30.320030] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 00:35:31.109 [2024-12-05 14:06:30.320074] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:31.109 [2024-12-05 14:06:30.396110] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:35:31.109 [2024-12-05 14:06:30.416669] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:31.109 [2024-12-05 14:06:30.416704] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:31.109 [2024-12-05 14:06:30.416710] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:31.109 [2024-12-05 14:06:30.416715] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:31.109 [2024-12-05 14:06:30.416720] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:35:31.109 [2024-12-05 14:06:30.417803] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:31.109 [2024-12-05 14:06:30.417802] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:31.109 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:31.109 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:35:31.109 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:31.109 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:31.109 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:35:31.109 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:31.109 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=1922179 00:35:31.109 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:35:31.109 [2024-12-05 14:06:30.718122] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xa0b860/0xa0fd50) succeed. 00:35:31.109 [2024-12-05 14:06:30.726057] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xa0cdb0/0xa513f0) succeed. 00:35:31.109 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:35:31.368 Malloc0 00:35:31.368 14:06:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:35:31.368 14:06:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:31.627 14:06:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:35:31.886 [2024-12-05 14:06:31.516603] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:35:31.886 14:06:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:35:31.886 [2024-12-05 14:06:31.728940] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:35:32.145 14:06:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=1922468 00:35:32.145 14:06:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:35:32.145 14:06:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- 
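
Condensed, the target-side bring-up traced above amounts to six RPCs (rpc.py here stands in for the tree's scripts/rpc.py path shown in the log; -r enables ANA reporting on the subsystem, -m 2 caps its namespaces):

    # transport, backing bdev, subsystem, namespace, and the two RDMA listeners
    rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0       # 64 MiB bdev, 512 B blocks
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421
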
host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:35:32.145 14:06:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 1922468 /var/tmp/bdevperf.sock 00:35:32.145 14:06:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 1922468 ']' 00:35:32.145 14:06:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:35:32.145 14:06:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:32.145 14:06:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:35:32.145 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:35:32.145 14:06:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:32.145 14:06:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:35:32.145 14:06:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:32.145 14:06:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:35:32.145 14:06:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:35:32.405 14:06:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:35:32.664 Nvme0n1 00:35:32.664 14:06:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:35:32.923 Nvme0n1 00:35:32.923 14:06:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:35:32.923 14:06:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:35:35.457 14:06:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:35:35.457 14:06:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n optimized 00:35:35.457 14:06:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:35:35.457 14:06:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:35:36.392 14:06:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- 
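
On the host side, bdevperf is started with -z so it waits for RPC-driven configuration on /var/tmp/bdevperf.sock, and both listeners are attached under one controller name: passing -x multipath with the same -b Nvme0 makes the 4421 attach register as a second path to the existing Nvme0n1 rather than a new device. Replayed with the flag values copied verbatim from the trace (-l/-o tune controller-loss and reconnect timing):

    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1   # -r -1, as traced
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma \
        -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma \
        -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
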
host/multipath_status.sh@92 -- # check_status true false true true true true 00:35:36.392 14:06:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:35:36.392 14:06:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:36.392 14:06:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:36.650 14:06:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:36.650 14:06:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:35:36.650 14:06:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:36.650 14:06:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:36.650 14:06:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:36.650 14:06:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:35:36.650 14:06:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:36.650 14:06:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:35:36.930 14:06:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:36.930 14:06:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:35:36.930 14:06:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:36.930 14:06:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:35:37.188 14:06:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:37.188 14:06:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:35:37.188 14:06:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:37.188 14:06:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:35:37.188 14:06:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:37.188 14:06:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 
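
Each check above is one port_status probe: bdev_nvme_get_io_paths dumped from the bdevperf socket, then a jq select on the listener's trsvcid pulling the requested field (current, connected, or accessible). A standalone reconstruction of the helper from the traced command pair at multipath_status.sh@64 (the real script's internals may differ slightly):

    port_status() {
        local port=$1 field=$2 expected=$3 got
        got=$(rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
              jq -r ".poll_groups[].io_paths[] | select (.transport.trsvcid==\"$port\").$field")
        [[ $got == "$expected" ]]
    }
    port_status 4420 current true    # true while 4420 is the path I/O is flowing on
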
00:35:37.188 14:06:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:37.188 14:06:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:35:37.445 14:06:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:37.445 14:06:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:35:37.445 14:06:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:35:37.703 14:06:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:35:37.703 14:06:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:35:39.076 14:06:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:35:39.076 14:06:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:35:39.076 14:06:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:39.076 14:06:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:39.076 14:06:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:39.076 14:06:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:35:39.076 14:06:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:39.076 14:06:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:39.076 14:06:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:39.076 14:06:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:35:39.076 14:06:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:39.076 14:06:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:35:39.334 14:06:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:39.334 14:06:39 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:35:39.334 14:06:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:39.334 14:06:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:35:39.592 14:06:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:39.592 14:06:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:35:39.592 14:06:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:39.592 14:06:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:35:39.851 14:06:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:39.851 14:06:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:35:39.851 14:06:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:39.851 14:06:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:35:39.851 14:06:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:39.851 14:06:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:35:39.851 14:06:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:35:40.109 14:06:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n non_optimized 00:35:40.368 14:06:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:35:41.305 14:06:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:35:41.305 14:06:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:35:41.305 14:06:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:41.305 14:06:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:41.563 14:06:41 
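
The six booleans handed to check_status line up with the probe order visible in the trace (@68 through @73): current on 4420 and 4421, then connected on both, then accessible on both. A sketch inferred from that ordering, built on the port_status sketch above:

    check_status() {
        port_status 4420 current    "$1" &&
        port_status 4421 current    "$2" &&
        port_status 4420 connected  "$3" &&
        port_status 4421 connected  "$4" &&
        port_status 4420 accessible "$5" &&
        port_status 4421 accessible "$6"
    }
    check_status true false true true true true   # the @92 case: 4420 active, all paths usable
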
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:41.563 14:06:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:35:41.563 14:06:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:41.563 14:06:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:41.563 14:06:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:41.563 14:06:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:35:41.563 14:06:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:41.563 14:06:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:35:41.820 14:06:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:41.820 14:06:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:35:41.820 14:06:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:41.820 14:06:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:35:42.078 14:06:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:42.078 14:06:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:35:42.078 14:06:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:42.078 14:06:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:35:42.078 14:06:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:42.078 14:06:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:35:42.078 14:06:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:42.078 14:06:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:35:42.336 14:06:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:42.336 14:06:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # 
set_ANA_state non_optimized inaccessible 00:35:42.336 14:06:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:35:42.594 14:06:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n inaccessible 00:35:42.853 14:06:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:35:43.791 14:06:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:35:43.791 14:06:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:35:43.791 14:06:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:43.791 14:06:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:44.050 14:06:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:44.050 14:06:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:35:44.050 14:06:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:44.050 14:06:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:44.050 14:06:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:44.050 14:06:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:35:44.050 14:06:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:44.050 14:06:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:35:44.310 14:06:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:44.310 14:06:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:35:44.310 14:06:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:44.310 14:06:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:35:44.569 14:06:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:44.569 14:06:44 
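
set_ANA_state, traced at @59/@60, is just two listener-state RPCs against the target, one per portal; the bdevperf host then observes the change on its next get_io_paths poll. Reconstructed with the NQN and address from this run:

    set_ANA_state() {
        rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t rdma -a 192.168.100.8 -s 4420 -n "$1"
        rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t rdma -a 192.168.100.8 -s 4421 -n "$2"
    }
    set_ANA_state non_optimized inaccessible   # the @104 transition: I/O should stay on 4420
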
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:35:44.569 14:06:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:44.569 14:06:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:35:44.569 14:06:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:44.569 14:06:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:35:44.569 14:06:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:44.569 14:06:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:35:44.828 14:06:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:44.828 14:06:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:35:44.828 14:06:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n inaccessible 00:35:45.088 14:06:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n inaccessible 00:35:45.347 14:06:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:35:46.282 14:06:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:35:46.282 14:06:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:35:46.282 14:06:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:46.282 14:06:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:46.541 14:06:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:46.541 14:06:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:35:46.541 14:06:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:46.541 14:06:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:46.541 14:06:46 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:46.541 14:06:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:35:46.541 14:06:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:46.541 14:06:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:35:46.799 14:06:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:46.799 14:06:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:35:46.799 14:06:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:46.799 14:06:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:35:47.058 14:06:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:47.058 14:06:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:35:47.058 14:06:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:47.058 14:06:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:35:47.058 14:06:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:47.058 14:06:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:35:47.058 14:06:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:47.058 14:06:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:35:47.316 14:06:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:47.316 14:06:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:35:47.316 14:06:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n inaccessible 00:35:47.575 14:06:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:35:47.575 14:06:47 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:35:48.951 14:06:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:35:48.951 14:06:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:35:48.951 14:06:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:48.951 14:06:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:48.951 14:06:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:48.951 14:06:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:35:48.951 14:06:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:48.951 14:06:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:48.951 14:06:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:48.951 14:06:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:35:48.951 14:06:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:48.951 14:06:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:35:49.209 14:06:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:49.209 14:06:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:35:49.209 14:06:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:35:49.209 14:06:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:49.468 14:06:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:49.468 14:06:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:35:49.468 14:06:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:35:49.469 14:06:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:49.727 14:06:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
[[ false == \f\a\l\s\e ]] 00:35:49.727 14:06:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:35:49.727 14:06:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:49.727 14:06:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:35:49.727 14:06:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:49.727 14:06:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:35:49.985 14:06:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:35:49.985 14:06:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n optimized 00:35:50.244 14:06:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:35:50.244 14:06:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:35:51.621 14:06:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:35:51.621 14:06:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:35:51.621 14:06:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:51.621 14:06:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:51.621 14:06:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:51.621 14:06:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:35:51.621 14:06:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:51.621 14:06:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:51.621 14:06:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:51.621 14:06:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:35:51.621 14:06:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 
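The surrounding trace keeps exercising three small helpers from what is apparently test/nvmf/host/multipath_status.sh: set_ANA_state (the @59/@60 entries) applies one ANA state per listener through rpc.py, port_status (the @64 entries) reads a path attribute back over bdevperf's RPC socket with bdev_nvme_get_io_paths plus a jq filter, and check_status strings six such assertions together. A minimal sketch of that pattern, reconstructed from the trace rather than copied from the SPDK sources (the rpc_py and bdevperf_rpc_sock variable names are assumptions):

    rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    bdevperf_rpc_sock=/var/tmp/bdevperf.sock

    set_ANA_state() {
        # one ANA state per listener port, exactly as the @59/@60 entries show;
        # these calls target the nvmf target's default RPC socket (no -s flag)
        $rpc_py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t rdma -a 192.168.100.8 -s 4420 -n $1
        $rpc_py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t rdma -a 192.168.100.8 -s 4421 -n $2
    }

    port_status() {
        # compare one io_path attribute (current/connected/accessible) of the
        # given listener port, as seen by bdevperf, against the expected value
        local port=$1 attr=$2 expected=$3
        local actual
        actual=$($rpc_py -s $bdevperf_rpc_sock bdev_nvme_get_io_paths \
            | jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$attr")
        [[ "$actual" == "$expected" ]]
    }

    # e.g. the "@112 set_ANA_state inaccessible optimized" block above amounts to:
    set_ANA_state inaccessible optimized
    sleep 1
    port_status 4420 current false && port_status 4421 current true

The sleep gives the host's multipath layer a moment to observe the ANA change before the assertions run, which is why each set_ANA_state in the trace is followed by a sleep 1 and then a check_status.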
00:35:51.880 14:06:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:51.880 14:06:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:51.880 14:06:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:35:51.880 14:06:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:51.880 14:06:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:35:52.139 14:06:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:52.139 14:06:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:35:52.139 14:06:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:35:52.139 14:06:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:52.398 14:06:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:52.398 14:06:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:35:52.398 14:06:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:52.398 14:06:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:35:52.398 14:06:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:52.398 14:06:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:35:52.398 14:06:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:35:52.657 14:06:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:35:52.916 14:06:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:35:53.859 14:06:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:35:53.859 14:06:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:35:53.859 14:06:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:53.859 14:06:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:54.118 14:06:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:54.118 14:06:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:35:54.118 14:06:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:54.118 14:06:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:54.377 14:06:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:54.377 14:06:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:35:54.377 14:06:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:54.377 14:06:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:35:54.377 14:06:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:54.377 14:06:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:35:54.377 14:06:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:54.377 14:06:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:35:54.637 14:06:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:54.637 14:06:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:35:54.637 14:06:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:54.637 14:06:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:35:54.896 14:06:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:54.896 14:06:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:35:54.897 14:06:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:54.897 14:06:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq 
-r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:35:54.897 14:06:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:54.897 14:06:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:35:54.897 14:06:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:35:55.156 14:06:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n non_optimized 00:35:55.421 14:06:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:35:56.357 14:06:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:35:56.357 14:06:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:35:56.357 14:06:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:56.357 14:06:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:56.615 14:06:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:56.615 14:06:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:35:56.615 14:06:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:56.615 14:06:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:56.875 14:06:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:56.875 14:06:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:35:56.875 14:06:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:35:56.875 14:06:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:56.875 14:06:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:56.875 14:06:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:35:56.875 14:06:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:35:56.875 14:06:56 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:57.134 14:06:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:57.134 14:06:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:35:57.134 14:06:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:57.134 14:06:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:35:57.393 14:06:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:57.393 14:06:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:35:57.393 14:06:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:57.393 14:06:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:35:57.393 14:06:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:57.393 14:06:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:35:57.393 14:06:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:35:57.652 14:06:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n inaccessible 00:35:57.910 14:06:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:35:58.847 14:06:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:35:58.847 14:06:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:35:58.847 14:06:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:58.847 14:06:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:59.106 14:06:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:59.106 14:06:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:35:59.106 14:06:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:59.106 14:06:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:59.365 14:06:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:59.365 14:06:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:35:59.365 14:06:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:59.365 14:06:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:35:59.365 14:06:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:59.365 14:06:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:35:59.365 14:06:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:59.365 14:06:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:35:59.624 14:06:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:59.624 14:06:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:35:59.624 14:06:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:59.624 14:06:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:35:59.883 14:06:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:59.883 14:06:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:35:59.883 14:06:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:59.883 14:06:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:35:59.883 14:06:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:59.883 14:06:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 1922468 00:35:59.883 14:06:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 1922468 ']' 00:35:59.883 14:06:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 1922468 00:35:59.883 14:06:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- 
common/autotest_common.sh@959 -- # uname
00:35:59.883 14:06:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:36:00.151 14:06:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1922468
00:36:00.151 14:06:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:36:00.151 14:06:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:36:00.151 14:06:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1922468'
00:36:00.151 killing process with pid 1922468
00:36:00.151 14:06:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 1922468
00:36:00.151 14:06:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 1922468
00:36:00.151 {
00:36:00.151 "results": [
00:36:00.151 {
00:36:00.151 "job": "Nvme0n1",
00:36:00.151 "core_mask": "0x4",
00:36:00.151 "workload": "verify",
00:36:00.151 "status": "terminated",
00:36:00.151 "verify_range": {
00:36:00.151 "start": 0,
00:36:00.151 "length": 16384
00:36:00.151 },
00:36:00.151 "queue_depth": 128,
00:36:00.151 "io_size": 4096,
00:36:00.151 "runtime": 26.931609,
00:36:00.151 "iops": 16723.43453374806,
00:36:00.151 "mibps": 65.32591614745336,
00:36:00.151 "io_failed": 0,
00:36:00.151 "io_timeout": 0,
00:36:00.151 "avg_latency_us": 7633.887278195647,
00:36:00.151 "min_latency_us": 43.23555555555556,
00:36:00.151 "max_latency_us": 3019898.88
00:36:00.151 }
00:36:00.151 ],
00:36:00.151 "core_count": 1
00:36:00.151 }
00:36:00.151 14:06:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 1922468
00:36:00.151 14:06:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt
[2024-12-05 14:06:31.800910] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization...
[2024-12-05 14:06:31.800957] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1922468 ]
[2024-12-05 14:06:31.876610] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-12-05 14:06:31.897576] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
Running I/O for 90 seconds...
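The bdevperf summary JSON just above is internally consistent: with 4096-byte I/O, MiB/s equals IOPS * io_size / 2^20, which a one-liner can confirm (pure arithmetic; nothing here talks to the target):

    awk 'BEGIN { printf "%.5f\n", 16723.43453374806 * 4096 / 1048576 }'
    # prints 65.32592, matching the reported "mibps" of 65.32591614745336

If that JSON block is captured to a file with the timestamp prefixes stripped (results.json is a hypothetical name, not a file the test writes), the headline figures can be pulled out with jq:

    jq -r '.results[0] | "\(.iops) IOPS, \(.mibps) MiB/s over \(.runtime)s"' results.json

The max_latency_us of roughly 3.0 s against a 7.6 ms average plausibly reflects I/O that was held up while both listeners sat in the inaccessible ANA state during the transitions traced above.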
00:36:00.151 19712.00 IOPS, 77.00 MiB/s [2024-12-05T13:07:00.004Z] 19776.00 IOPS, 77.25 MiB/s [2024-12-05T13:07:00.004Z] 19786.67 IOPS, 77.29 MiB/s [2024-12-05T13:07:00.004Z] 19768.00 IOPS, 77.22 MiB/s [2024-12-05T13:07:00.004Z] 19751.60 IOPS, 77.15 MiB/s [2024-12-05T13:07:00.004Z] 19786.67 IOPS, 77.29 MiB/s [2024-12-05T13:07:00.004Z] 19776.00 IOPS, 77.25 MiB/s [2024-12-05T13:07:00.004Z] 19738.50 IOPS, 77.10 MiB/s [2024-12-05T13:07:00.004Z] 19719.11 IOPS, 77.03 MiB/s [2024-12-05T13:07:00.004Z] 19705.60 IOPS, 76.97 MiB/s [2024-12-05T13:07:00.004Z] 19694.09 IOPS, 76.93 MiB/s [2024-12-05T13:07:00.004Z] [2024-12-05 14:06:44.730122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:43520 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004376000 len:0x1000 key:0x183400 00:36:00.151 [2024-12-05 14:06:44.730156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:36:00.151 [2024-12-05 14:06:44.730284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:44240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.151 [2024-12-05 14:06:44.730294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:36:00.151 [2024-12-05 14:06:44.730304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:44248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.151 [2024-12-05 14:06:44.730310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:36:00.151 [2024-12-05 14:06:44.730319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:44256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.151 [2024-12-05 14:06:44.730325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:00.151 [2024-12-05 14:06:44.730334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:44264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.151 [2024-12-05 14:06:44.730340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:36:00.151 [2024-12-05 14:06:44.730352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:43528 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000437e000 len:0x1000 key:0x183400 00:36:00.151 [2024-12-05 14:06:44.730358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:00.151 [2024-12-05 14:06:44.730367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:44272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.151 [2024-12-05 14:06:44.730373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:00.151 [2024-12-05 14:06:44.730386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:44280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.151 [2024-12-05 14:06:44.730392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:36:00.151 
[2024-12-05 14:06:44.730401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:44288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.151 [2024-12-05 14:06:44.730407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:36:00.151 [2024-12-05 14:06:44.730416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:44296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.151 [2024-12-05 14:06:44.730428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:36:00.151 [2024-12-05 14:06:44.730437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:44304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.151 [2024-12-05 14:06:44.730443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:36:00.151 [2024-12-05 14:06:44.730451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:44312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.151 [2024-12-05 14:06:44.730457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:36:00.151 [2024-12-05 14:06:44.730466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:44320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.151 [2024-12-05 14:06:44.730472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:36:00.151 [2024-12-05 14:06:44.730481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:44328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.151 [2024-12-05 14:06:44.730487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:36:00.151 [2024-12-05 14:06:44.730495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:44336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.152 [2024-12-05 14:06:44.730501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:36:00.152 [2024-12-05 14:06:44.730510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:44344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.152 [2024-12-05 14:06:44.730516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:36:00.152 [2024-12-05 14:06:44.730525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:44352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.152 [2024-12-05 14:06:44.730530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:36:00.152 [2024-12-05 14:06:44.730539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:44360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.152 [2024-12-05 14:06:44.730545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:30 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:36:00.152 [2024-12-05 14:06:44.730553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:44368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.152 [2024-12-05 14:06:44.730560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:36:00.152 [2024-12-05 14:06:44.730568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:44376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.152 [2024-12-05 14:06:44.730574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:36:00.152 [2024-12-05 14:06:44.730582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.152 [2024-12-05 14:06:44.730588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:00.152 [2024-12-05 14:06:44.730596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:44392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.152 [2024-12-05 14:06:44.730604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:36:00.152 [2024-12-05 14:06:44.730612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:44400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.152 [2024-12-05 14:06:44.730618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:36:00.152 [2024-12-05 14:06:44.730626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:44408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.152 [2024-12-05 14:06:44.730632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:36:00.152 [2024-12-05 14:06:44.730640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:44416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.152 [2024-12-05 14:06:44.730646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:36:00.152 [2024-12-05 14:06:44.730654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:44424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.152 [2024-12-05 14:06:44.730660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:36:00.152 [2024-12-05 14:06:44.730668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:44432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.152 [2024-12-05 14:06:44.730674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:36:00.152 [2024-12-05 14:06:44.730682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:44440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.152 [2024-12-05 14:06:44.730688] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:36:00.152 [2024-12-05 14:06:44.730697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:44448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.152 [2024-12-05 14:06:44.730702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:36:00.152 [2024-12-05 14:06:44.730711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:44456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.152 [2024-12-05 14:06:44.730717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:36:00.152 [2024-12-05 14:06:44.730726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:44464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.152 [2024-12-05 14:06:44.730732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:36:00.152 [2024-12-05 14:06:44.730740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:44472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.152 [2024-12-05 14:06:44.730746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:36:00.152 [2024-12-05 14:06:44.730756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:44480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.152 [2024-12-05 14:06:44.730761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:00.152 [2024-12-05 14:06:44.730770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:44488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.152 [2024-12-05 14:06:44.730776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:36:00.152 [2024-12-05 14:06:44.730844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:44496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.152 [2024-12-05 14:06:44.730851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:36:00.152 [2024-12-05 14:06:44.730862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:43536 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004336000 len:0x1000 key:0x183400 00:36:00.152 [2024-12-05 14:06:44.730869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:36:00.152 [2024-12-05 14:06:44.730879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:43544 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004334000 len:0x1000 key:0x183400 00:36:00.152 [2024-12-05 14:06:44.730885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:00.152 [2024-12-05 14:06:44.730895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:43552 len:8 SGL KEYED DATA BLOCK 
ADDRESS 0x200004332000 len:0x1000 key:0x183400 00:36:00.152 [2024-12-05 14:06:44.730901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:00.152 [2024-12-05 14:06:44.730911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:43560 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004330000 len:0x1000 key:0x183400 00:36:00.152 [2024-12-05 14:06:44.730917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:00.152 [2024-12-05 14:06:44.730927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:43568 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000432e000 len:0x1000 key:0x183400 00:36:00.152 [2024-12-05 14:06:44.730933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:36:00.152 [2024-12-05 14:06:44.730943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:43576 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000432c000 len:0x1000 key:0x183400 00:36:00.152 [2024-12-05 14:06:44.730949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:36:00.152 [2024-12-05 14:06:44.730959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:43584 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000432a000 len:0x1000 key:0x183400 00:36:00.152 [2024-12-05 14:06:44.730966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:36:00.152 [2024-12-05 14:06:44.730975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:43592 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004328000 len:0x1000 key:0x183400 00:36:00.152 [2024-12-05 14:06:44.730981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:36:00.152 [2024-12-05 14:06:44.730991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:43600 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004326000 len:0x1000 key:0x183400 00:36:00.152 [2024-12-05 14:06:44.730997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:36:00.152 [2024-12-05 14:06:44.731007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:43608 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004324000 len:0x1000 key:0x183400 00:36:00.153 [2024-12-05 14:06:44.731013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:36:00.153 [2024-12-05 14:06:44.731023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:43616 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004322000 len:0x1000 key:0x183400 00:36:00.153 [2024-12-05 14:06:44.731031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:36:00.153 [2024-12-05 14:06:44.731041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:43624 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004320000 len:0x1000 
key:0x183400 00:36:00.153 [2024-12-05 14:06:44.731047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:36:00.153 [2024-12-05 14:06:44.731057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:43632 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000431e000 len:0x1000 key:0x183400 00:36:00.153 [2024-12-05 14:06:44.731063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:36:00.153 [2024-12-05 14:06:44.731073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:43640 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000431c000 len:0x1000 key:0x183400 00:36:00.153 [2024-12-05 14:06:44.731079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:36:00.153 [2024-12-05 14:06:44.731089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:43648 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000431a000 len:0x1000 key:0x183400 00:36:00.153 [2024-12-05 14:06:44.731096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:36:00.153 [2024-12-05 14:06:44.731106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:43656 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004318000 len:0x1000 key:0x183400 00:36:00.153 [2024-12-05 14:06:44.731112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:36:00.153 [2024-12-05 14:06:44.731122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:43664 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004316000 len:0x1000 key:0x183400 00:36:00.153 [2024-12-05 14:06:44.731128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:36:00.153 [2024-12-05 14:06:44.731139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:43672 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004314000 len:0x1000 key:0x183400 00:36:00.153 [2024-12-05 14:06:44.731145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:36:00.153 [2024-12-05 14:06:44.731155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:43680 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004312000 len:0x1000 key:0x183400 00:36:00.153 [2024-12-05 14:06:44.731161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:36:00.153 [2024-12-05 14:06:44.731170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:43688 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004310000 len:0x1000 key:0x183400 00:36:00.153 [2024-12-05 14:06:44.731176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:36:00.153 [2024-12-05 14:06:44.731186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:43696 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000430e000 len:0x1000 key:0x183400 00:36:00.153 [2024-12-05 
14:06:44.731192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:36:00.153 [2024-12-05 14:06:44.731203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:43704 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000430c000 len:0x1000 key:0x183400 00:36:00.153 [2024-12-05 14:06:44.731210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:36:00.153 [2024-12-05 14:06:44.731220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:43712 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000430a000 len:0x1000 key:0x183400 00:36:00.153 [2024-12-05 14:06:44.731226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:36:00.153 [2024-12-05 14:06:44.731236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:43720 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004308000 len:0x1000 key:0x183400 00:36:00.153 [2024-12-05 14:06:44.731242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:36:00.153 [2024-12-05 14:06:44.731253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:43728 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004306000 len:0x1000 key:0x183400 00:36:00.153 [2024-12-05 14:06:44.731259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:36:00.153 [2024-12-05 14:06:44.731269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:43736 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004304000 len:0x1000 key:0x183400 00:36:00.153 [2024-12-05 14:06:44.731275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:36:00.153 [2024-12-05 14:06:44.731285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:43744 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004302000 len:0x1000 key:0x183400 00:36:00.153 [2024-12-05 14:06:44.731292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:36:00.153 [2024-12-05 14:06:44.731302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:43752 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004300000 len:0x1000 key:0x183400 00:36:00.153 [2024-12-05 14:06:44.731308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:36:00.153 [2024-12-05 14:06:44.731319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:43760 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004380000 len:0x1000 key:0x183400 00:36:00.153 [2024-12-05 14:06:44.731324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:36:00.153 [2024-12-05 14:06:44.731336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:43768 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000437c000 len:0x1000 key:0x183400 00:36:00.153 [2024-12-05 14:06:44.731342] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:36:00.153 [2024-12-05 14:06:44.731352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:43776 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000437a000 len:0x1000 key:0x183400
00:36:00.153 [2024-12-05 14:06:44.731358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:001d p:0 m:0 dnr:0
[... the same nvme_io_qpair_print_command / spdk_nvme_print_completion *NOTICE* pair repeats from 2024-12-05 14:06:44.731369 through 14:06:44.741629 for every remaining outstanding command on qid:1 (WRITE lba:44504-44536, READ lba:43784-44232, sqhd:001e-005b); each completion reports ASYMMETRIC ACCESS INACCESSIBLE (03/02) ...]
00:36:00.155 19568.00 IOPS, 76.44 MiB/s [2024-12-05T13:07:00.008Z]
18062.77 IOPS, 70.56 MiB/s [2024-12-05T13:07:00.008Z]
16772.57 IOPS, 65.52 MiB/s [2024-12-05T13:07:00.008Z]
15735.47 IOPS, 61.47 MiB/s [2024-12-05T13:07:00.008Z]
15980.75 IOPS, 62.42 MiB/s [2024-12-05T13:07:00.008Z]
16199.94 IOPS, 63.28 MiB/s [2024-12-05T13:07:00.008Z]
16191.67 IOPS, 63.25 MiB/s [2024-12-05T13:07:00.008Z]
16167.00 IOPS, 63.15 MiB/s [2024-12-05T13:07:00.008Z]
16226.40 IOPS, 63.38 MiB/s [2024-12-05T13:07:00.008Z]
16393.52 IOPS, 64.04 MiB/s [2024-12-05T13:07:00.008Z]
16542.32 IOPS, 64.62 MiB/s [2024-12-05T13:07:00.008Z]
16549.61 IOPS, 64.65 MiB/s [2024-12-05T13:07:00.008Z]
16508.17 IOPS, 64.49 MiB/s [2024-12-05T13:07:00.008Z]
00:36:00.155 [2024-12-05 14:06:57.556821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:115672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:00.155 [2024-12-05 14:06:57.556854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
[... the same command/completion *NOTICE* pair repeats from 2024-12-05 14:06:57.556870 through 14:06:57.561571 for the outstanding READ/WRITE commands on qid:1 (lba:115160-116416, sqhd:0051-007f wrapping to 0000-0052); every completion again reports ASYMMETRIC ACCESS INACCESSIBLE (03/02) ...]
00:36:00.160 [2024-12-05 14:06:57.561580] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:116448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.160 [2024-12-05 14:06:57.561586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:36:00.160 [2024-12-05 14:06:57.562881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:115728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.160 [2024-12-05 14:06:57.562895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:36:00.160 [2024-12-05 14:06:57.562907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:115272 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004302000 len:0x1000 key:0x183400 00:36:00.160 [2024-12-05 14:06:57.562914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:36:00.160 [2024-12-05 14:06:57.562922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:115352 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004342000 len:0x1000 key:0x183400 00:36:00.160 [2024-12-05 14:06:57.562928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:36:00.160 [2024-12-05 14:06:57.563372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:115800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.160 [2024-12-05 14:06:57.563388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:36:00.160 [2024-12-05 14:06:57.563397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:115840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.160 [2024-12-05 14:06:57.563404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:36:00.160 [2024-12-05 14:06:57.563412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:115888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.160 [2024-12-05 14:06:57.563418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:36:00.160 [2024-12-05 14:06:57.563426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:115344 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004306000 len:0x1000 key:0x183400 00:36:00.160 [2024-12-05 14:06:57.563432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:36:00.160 [2024-12-05 14:06:57.563442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:115944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.160 [2024-12-05 14:06:57.563448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:36:00.160 [2024-12-05 14:06:57.563457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:115720 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004354000 len:0x1000 key:0x183400 00:36:00.160 [2024-12-05 14:06:57.563463] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:36:00.160 [2024-12-05 14:06:57.563471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:116176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.160 [2024-12-05 14:06:57.563477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:36:00.160 [2024-12-05 14:06:57.563486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:115768 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000438e000 len:0x1000 key:0x183400 00:36:00.160 [2024-12-05 14:06:57.563492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:36:00.160 [2024-12-05 14:06:57.563501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:116224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.160 [2024-12-05 14:06:57.563507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:00.160 [2024-12-05 14:06:57.563515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:116240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.160 [2024-12-05 14:06:57.563521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:36:00.160 [2024-12-05 14:06:57.563529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:115880 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004332000 len:0x1000 key:0x183400 00:36:00.160 [2024-12-05 14:06:57.563536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:00.160 [2024-12-05 14:06:57.563544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:116256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.160 [2024-12-05 14:06:57.563550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:00.160 [2024-12-05 14:06:57.563558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:116272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.160 [2024-12-05 14:06:57.563567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:36:00.160 [2024-12-05 14:06:57.563575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:115760 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004350000 len:0x1000 key:0x183400 00:36:00.160 [2024-12-05 14:06:57.563581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:36:00.160 [2024-12-05 14:06:57.563589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:115792 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ce000 len:0x1000 key:0x183400 00:36:00.160 [2024-12-05 14:06:57.563596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:36:00.160 [2024-12-05 14:06:57.563604] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:116480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.160 [2024-12-05 14:06:57.563610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:36:00.160 [2024-12-05 14:06:57.563618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:116488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.160 [2024-12-05 14:06:57.563624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:36:00.160 [2024-12-05 14:06:57.563632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:116504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.160 [2024-12-05 14:06:57.563638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:36:00.160 [2024-12-05 14:06:57.563646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:116512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.160 [2024-12-05 14:06:57.563653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:36:00.161 [2024-12-05 14:06:57.563661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:116520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.161 [2024-12-05 14:06:57.563667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:36:00.161 [2024-12-05 14:06:57.563676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:116160 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043d2000 len:0x1000 key:0x183400 00:36:00.161 [2024-12-05 14:06:57.563682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:36:00.161 [2024-12-05 14:06:57.563691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:116184 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004338000 len:0x1000 key:0x183400 00:36:00.161 [2024-12-05 14:06:57.563697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:36:00.161 [2024-12-05 14:06:57.563705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:116544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.161 [2024-12-05 14:06:57.563711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:36:00.161 [2024-12-05 14:06:57.563719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:116552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.161 [2024-12-05 14:06:57.563725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:36:00.161 [2024-12-05 14:06:57.563733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:116568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.161 [2024-12-05 14:06:57.563740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:36:00.161 [2024-12-05 14:06:57.563748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:116248 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004386000 len:0x1000 key:0x183400 00:36:00.161 [2024-12-05 14:06:57.563754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:00.161 [2024-12-05 14:06:57.563763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:116592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.161 [2024-12-05 14:06:57.563769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:36:00.161 [2024-12-05 14:06:57.563777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:116280 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000434a000 len:0x1000 key:0x183400 00:36:00.161 [2024-12-05 14:06:57.563783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:36:00.161 [2024-12-05 14:06:57.563791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:116608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.161 [2024-12-05 14:06:57.563797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:36:00.161 [2024-12-05 14:06:57.563805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:116312 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004394000 len:0x1000 key:0x183400 00:36:00.161 [2024-12-05 14:06:57.563812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:36:00.161 [2024-12-05 14:06:57.563820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:116336 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000430c000 len:0x1000 key:0x183400 00:36:00.161 [2024-12-05 14:06:57.563826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:36:00.161 [2024-12-05 14:06:57.563834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:116640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.161 [2024-12-05 14:06:57.563840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:36:00.161 [2024-12-05 14:06:57.564016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:116376 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000432c000 len:0x1000 key:0x183400 00:36:00.161 [2024-12-05 14:06:57.564024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:36:00.161 [2024-12-05 14:06:57.564034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:116408 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004376000 len:0x1000 key:0x183400 00:36:00.161 [2024-12-05 14:06:57.564040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:36:00.161 [2024-12-05 14:06:57.564049] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:116648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.161 [2024-12-05 14:06:57.564054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:36:00.161 [2024-12-05 14:06:57.564063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:116656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.161 [2024-12-05 14:06:57.564069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:36:00.161 [2024-12-05 14:06:57.564080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:115976 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000433c000 len:0x1000 key:0x183400 00:36:00.161 [2024-12-05 14:06:57.564086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:36:00.161 [2024-12-05 14:06:57.564095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:116016 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004366000 len:0x1000 key:0x183400 00:36:00.161 [2024-12-05 14:06:57.564101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:00.161 [2024-12-05 14:06:57.564109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:116032 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004304000 len:0x1000 key:0x183400 00:36:00.161 [2024-12-05 14:06:57.564115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:36:00.161 [2024-12-05 14:06:57.564123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:116056 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004352000 len:0x1000 key:0x183400 00:36:00.161 [2024-12-05 14:06:57.564130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:36:00.161 [2024-12-05 14:06:57.564139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:116088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004300000 len:0x1000 key:0x183400 00:36:00.161 [2024-12-05 14:06:57.564145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:36:00.161 [2024-12-05 14:06:57.564153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:116688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.161 [2024-12-05 14:06:57.564159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:00.161 [2024-12-05 14:06:57.564167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:116696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.161 [2024-12-05 14:06:57.564173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:00.161 [2024-12-05 14:06:57.564181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:115704 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000434c000 
len:0x1000 key:0x183400 00:36:00.161 [2024-12-05 14:06:57.564188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:00.161 [2024-12-05 14:06:57.564196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:116456 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000431c000 len:0x1000 key:0x183400 00:36:00.161 [2024-12-05 14:06:57.564203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:36:00.162 [2024-12-05 14:06:57.564211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:115984 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000432a000 len:0x1000 key:0x183400 00:36:00.162 [2024-12-05 14:06:57.564217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:36:00.162 [2024-12-05 14:06:57.564225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:116320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.162 [2024-12-05 14:06:57.564231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:36:00.162 [2024-12-05 14:06:57.573075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:116344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.162 [2024-12-05 14:06:57.573090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:36:00.162 [2024-12-05 14:06:57.573102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:116360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.162 [2024-12-05 14:06:57.573110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:36:00.162 [2024-12-05 14:06:57.573121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:116384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.162 [2024-12-05 14:06:57.573130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:36:00.162 [2024-12-05 14:06:57.573141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:116416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.162 [2024-12-05 14:06:57.573149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:36:00.162 [2024-12-05 14:06:57.573160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:115672 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004308000 len:0x1000 key:0x183400 00:36:00.162 [2024-12-05 14:06:57.573169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:36:00.162 [2024-12-05 14:06:57.573181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:115456 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004334000 len:0x1000 key:0x183400 00:36:00.162 [2024-12-05 14:06:57.573189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 
sqhd:000b p:0 m:0 dnr:0 00:36:00.162 [2024-12-05 14:06:57.573200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:115528 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000438a000 len:0x1000 key:0x183400 00:36:00.162 [2024-12-05 14:06:57.573208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:36:00.162 [2024-12-05 14:06:57.573220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:115608 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000433a000 len:0x1000 key:0x183400 00:36:00.162 [2024-12-05 14:06:57.573228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:36:00.162 [2024-12-05 14:06:57.573240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:115384 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004380000 len:0x1000 key:0x183400 00:36:00.162 [2024-12-05 14:06:57.573248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:36:00.162 [2024-12-05 14:06:57.573260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:116064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.162 [2024-12-05 14:06:57.573268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:36:00.162 [2024-12-05 14:06:57.573279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:115520 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004390000 len:0x1000 key:0x183400 00:36:00.162 [2024-12-05 14:06:57.573287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:36:00.162 [2024-12-05 14:06:57.573299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:115600 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004330000 len:0x1000 key:0x183400 00:36:00.162 [2024-12-05 14:06:57.573307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:36:00.162 [2024-12-05 14:06:57.573320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:115680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.162 [2024-12-05 14:06:57.573328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:36:00.162 [2024-12-05 14:06:57.573339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:116448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.162 [2024-12-05 14:06:57.573348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:36:00.162 [2024-12-05 14:06:57.573362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:115272 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004302000 len:0x1000 key:0x183400 00:36:00.162 [2024-12-05 14:06:57.573371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:36:00.162 [2024-12-05 14:06:57.573388] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:14 nsid:1 lba:115752 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004316000 len:0x1000 key:0x183400 00:36:00.162 [2024-12-05 14:06:57.573396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:36:00.162 [2024-12-05 14:06:57.575236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:115840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.162 [2024-12-05 14:06:57.575255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:36:00.162 [2024-12-05 14:06:57.575277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:115344 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004306000 len:0x1000 key:0x183400 00:36:00.162 [2024-12-05 14:06:57.575287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:36:00.162 [2024-12-05 14:06:57.575299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:115720 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004354000 len:0x1000 key:0x183400 00:36:00.162 [2024-12-05 14:06:57.575307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:36:00.162 [2024-12-05 14:06:57.575319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:115768 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000438e000 len:0x1000 key:0x183400 00:36:00.163 [2024-12-05 14:06:57.575327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:36:00.163 [2024-12-05 14:06:57.575339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:116240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.163 [2024-12-05 14:06:57.575347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:36:00.163 [2024-12-05 14:06:57.575358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:116256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.163 [2024-12-05 14:06:57.575367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:36:00.163 [2024-12-05 14:06:57.575382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:115760 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004350000 len:0x1000 key:0x183400 00:36:00.163 [2024-12-05 14:06:57.575391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:36:00.163 [2024-12-05 14:06:57.575402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:116480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.163 [2024-12-05 14:06:57.575413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:36:00.163 [2024-12-05 14:06:57.575425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:116504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.163 [2024-12-05 14:06:57.575433] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:36:00.163 [2024-12-05 14:06:57.575792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:116520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.163 [2024-12-05 14:06:57.575803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:36:00.163 [2024-12-05 14:06:57.575815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:116184 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004338000 len:0x1000 key:0x183400 00:36:00.163 [2024-12-05 14:06:57.575824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:36:00.163 [2024-12-05 14:06:57.575836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:116552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.163 [2024-12-05 14:06:57.575843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:00.163 [2024-12-05 14:06:57.575855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:116248 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004386000 len:0x1000 key:0x183400 00:36:00.163 [2024-12-05 14:06:57.575863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:00.163 [2024-12-05 14:06:57.575875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:116280 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000434a000 len:0x1000 key:0x183400 00:36:00.163 [2024-12-05 14:06:57.575883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:36:00.163 [2024-12-05 14:06:57.575894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:116312 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004394000 len:0x1000 key:0x183400 00:36:00.163 [2024-12-05 14:06:57.575903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:36:00.163 [2024-12-05 14:06:57.575914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:116640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.163 [2024-12-05 14:06:57.575922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:36:00.163 [2024-12-05 14:06:57.575934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:115824 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004320000 len:0x1000 key:0x183400 00:36:00.163 [2024-12-05 14:06:57.575942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:36:00.163 [2024-12-05 14:06:57.575953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:116720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.163 [2024-12-05 14:06:57.575961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:36:00.163 
[2024-12-05 14:06:57.575972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:116728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.163 [2024-12-05 14:06:57.575980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:36:00.163 [2024-12-05 14:06:57.575992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:116192 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c0000 len:0x1000 key:0x183400 00:36:00.163 [2024-12-05 14:06:57.576002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:36:00.163 [2024-12-05 14:06:57.576015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:116232 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043bc000 len:0x1000 key:0x183400 00:36:00.163 [2024-12-05 14:06:57.576023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:36:00.163 [2024-12-05 14:06:57.576034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:116744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.163 [2024-12-05 14:06:57.576042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:36:00.163 [2024-12-05 14:06:57.576054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:116288 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004370000 len:0x1000 key:0x183400 00:36:00.163 [2024-12-05 14:06:57.576062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:36:00.163 [2024-12-05 14:06:57.576074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:116472 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004340000 len:0x1000 key:0x183400 00:36:00.163 [2024-12-05 14:06:57.576082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:36:00.163 [2024-12-05 14:06:57.576093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:116760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.163 [2024-12-05 14:06:57.576101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:36:00.163 [2024-12-05 14:06:57.576113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:116768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.163 [2024-12-05 14:06:57.576121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:36:00.163 [2024-12-05 14:06:57.576133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:116528 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004344000 len:0x1000 key:0x183400 00:36:00.163 [2024-12-05 14:06:57.576140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:36:00.163 [2024-12-05 14:06:57.576152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:116784 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:36:00.163 [2024-12-05 14:06:57.576160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:36:00.163 [2024-12-05 14:06:57.576171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:116560 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004382000 len:0x1000 key:0x183400 00:36:00.163 [2024-12-05 14:06:57.576180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:36:00.163 [2024-12-05 14:06:57.576191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:116584 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000430e000 len:0x1000 key:0x183400 00:36:00.163 [2024-12-05 14:06:57.576199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:36:00.163 [2024-12-05 14:06:57.576211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:116800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.164 [2024-12-05 14:06:57.576219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:36:00.164 [2024-12-05 14:06:57.576232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:116624 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004372000 len:0x1000 key:0x183400 00:36:00.164 [2024-12-05 14:06:57.576240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:36:00.164 [2024-12-05 14:06:57.576251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:116808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.164 [2024-12-05 14:06:57.576260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:36:00.164 [2024-12-05 14:06:57.576271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:116824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.164 [2024-12-05 14:06:57.576279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:36:00.164 [2024-12-05 14:06:57.576292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:116664 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004358000 len:0x1000 key:0x183400 00:36:00.164 [2024-12-05 14:06:57.576300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:36:00.164 [2024-12-05 14:06:57.576311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:116672 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004328000 len:0x1000 key:0x183400 00:36:00.164 [2024-12-05 14:06:57.576319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:36:00.164 [2024-12-05 14:06:57.576330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:116848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.164 [2024-12-05 14:06:57.576338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:88 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:36:00.164 [2024-12-05 14:06:57.576350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:116864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.164 [2024-12-05 14:06:57.576358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:36:00.164 [2024-12-05 14:06:57.576369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:116872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.164 [2024-12-05 14:06:57.576383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:36:00.164 [2024-12-05 14:06:57.576394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:116304 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004326000 len:0x1000 key:0x183400 00:36:00.164 [2024-12-05 14:06:57.576403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:36:00.164 [2024-12-05 14:06:57.576414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:116352 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000436c000 len:0x1000 key:0x183400 00:36:00.164 [2024-12-05 14:06:57.576422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:36:00.164 [2024-12-05 14:06:57.576434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:116400 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000433e000 len:0x1000 key:0x183400 00:36:00.164 [2024-12-05 14:06:57.576443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:36:00.164 [2024-12-05 14:06:57.576454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:116904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.164 [2024-12-05 14:06:57.576464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:36:00.164 [2024-12-05 14:06:57.576476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:116008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043d6000 len:0x1000 key:0x183400 00:36:00.164 [2024-12-05 14:06:57.576485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:00.164 [2024-12-05 14:06:57.576496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:116928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.164 [2024-12-05 14:06:57.576504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:00.164 [2024-12-05 14:06:57.576515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:116104 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000437c000 len:0x1000 key:0x183400 00:36:00.164 [2024-12-05 14:06:57.576523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:36:00.164 [2024-12-05 14:06:57.576535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
00:36:00.159 [... repeated nvme_qpair print output trimmed: several hundred more READ/WRITE command/completion pairs on qid:1, each ending in ASYMMETRIC ACCESS INACCESSIBLE (03/02) while the active path was unavailable ...]
00:36:00.164 16504.88 IOPS, 64.47 MiB/s
00:36:00.164 [2024-12-05T13:07:00.017Z] 16625.73 IOPS, 64.94 MiB/s
00:36:00.164 [2024-12-05T13:07:00.017Z] Received shutdown signal, test time was about 26.932208 seconds
00:36:00.164
00:36:00.164 Latency(us)
00:36:00.164 [2024-12-05T13:07:00.017Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:36:00.164 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:36:00.164 Verification LBA range: start 0x0 length 0x4000
00:36:00.164 Nvme0n1 : 26.93 16723.43 65.33 0.00 0.00 7633.89 43.24 3019898.88
00:36:00.164 [2024-12-05T13:07:00.017Z] ===================================================================================================================
00:36:00.164 [2024-12-05T13:07:00.017Z] Total : 16723.43 65.33 0.00 0.00 7633.89 43.24 3019898.88
00:36:00.164 14:06:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:36:00.515 14:07:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:36:00.515 14:07:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt
00:36:00.515 14:07:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:36:00.515 14:07:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup
00:36:00.515 14:07:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync
00:36:00.515 14:07:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:36:00.515 14:07:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:36:00.515 14:07:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e
00:36:00.515 14:07:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20}
00:36:00.515 14:07:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
00:36:00.515 rmmod nvme_rdma
00:36:00.515 rmmod nvme_fabrics
00:36:00.515 14:07:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:36:00.515 14:07:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e
00:36:00.515 14:07:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0
00:36:00.515 14:07:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 1922179 ']'
00:36:00.515 14:07:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 1922179
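For readers reconstructing what the harness just did: the nvmftestfini call above fans out into the nvmfcleanup module unload, then hands the app PID to the killprocess helper whose expansion continues in the trace below. A minimal bash sketch of that teardown, assuming simplified retry pacing and the illustrative variable names SPDK_DIR and NVMF_APP_PID; the traced SPDK helpers in test/nvmf/common.sh and test/common/autotest_common.sh carry more error handling than shown here.

#!/usr/bin/env bash
# Sketch of the traced teardown; a simplification, not the verbatim SPDK helpers.
SPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk  # workspace root seen in the log
NVMF_APP_PID=1922179                                    # target app pid from the log

# Tear down the subsystem under test, clear the exit traps, remove the scratch file.
"$SPDK_DIR/scripts/rpc.py" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
trap - SIGINT SIGTERM EXIT
rm -f "$SPDK_DIR/test/nvmf/host/try.txt"

# nvmfcleanup: flush outstanding I/O, then retry unloading the fabric modules;
# the loop tolerates transient "module in use" failures (the retry pacing is an
# assumption, the trace above only shows the first pass succeeding).
sync
set +e
for i in {1..20}; do
    modprobe -v -r nvme-rdma && modprobe -v -r nvme-fabrics && break
    sleep 1
done
set -e

# killprocess: sanity-check the pid, refuse to kill sudo itself, then reap the
# app (assumed to be a child of this shell, as it is in the harness).
if [ -n "$NVMF_APP_PID" ] && kill -0 "$NVMF_APP_PID" 2>/dev/null; then
    process_name=$(ps --no-headers -o comm= "$NVMF_APP_PID")
    if [ "$process_name" != "sudo" ]; then
        echo "killing process with pid $NVMF_APP_PID"
        kill "$NVMF_APP_PID"
        wait "$NVMF_APP_PID" || true
    fi
fi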
00:36:00.515 14:07:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 1922179 ']'
00:36:00.515 14:07:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 1922179
00:36:00.515 14:07:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname
00:36:00.515 14:07:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:36:00.515 14:07:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1922179
00:36:00.515 14:07:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:36:00.515 14:07:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:36:00.515 14:07:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1922179'
00:36:00.515 killing process with pid 1922179
00:36:00.515 14:07:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 1922179
00:36:00.515 14:07:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 1922179
00:36:00.804 14:07:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:36:00.804 14:07:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]]
00:36:00.804
00:36:00.804 real 0m36.404s
00:36:00.804 user 1m45.081s
00:36:00.804 sys 0m7.855s
00:36:00.804 14:07:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable
00:36:00.804 14:07:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:36:00.804 ************************************
00:36:00.804 END TEST nvmf_host_multipath_status
00:36:00.804 ************************************
00:36:00.804 14:07:00 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=rdma
00:36:00.804 14:07:00 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:36:00.804 14:07:00 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:36:00.804 14:07:00 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:36:00.804 ************************************
00:36:00.804 START TEST nvmf_discovery_remove_ifc
00:36:00.804 ************************************
00:36:00.804 14:07:00 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=rdma
00:36:00.804 * Looking for test storage...
00:36:00.804 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:36:00.804 14:07:00 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:36:00.804 14:07:00 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lcov --version 00:36:00.804 14:07:00 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:36:01.064 14:07:00 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:36:01.064 14:07:00 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:01.064 14:07:00 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:01.064 14:07:00 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:01.064 14:07:00 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:36:01.064 14:07:00 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:36:01.064 14:07:00 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:36:01.064 14:07:00 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:36:01.064 14:07:00 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:36:01.064 14:07:00 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:36:01.064 14:07:00 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:36:01.064 14:07:00 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:01.064 14:07:00 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:36:01.064 14:07:00 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:36:01.064 14:07:00 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:01.064 14:07:00 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:01.064 14:07:00 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:36:01.064 14:07:00 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:36:01.064 14:07:00 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:01.064 14:07:00 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:36:01.064 14:07:00 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:36:01.064 14:07:00 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:36:01.064 14:07:00 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:36:01.064 14:07:00 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:01.064 14:07:00 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:36:01.064 14:07:00 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:36:01.064 14:07:00 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:01.064 14:07:00 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:01.064 14:07:00 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:36:01.064 14:07:00 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:01.064 14:07:00 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:36:01.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:01.064 --rc genhtml_branch_coverage=1 00:36:01.064 --rc genhtml_function_coverage=1 00:36:01.064 --rc genhtml_legend=1 00:36:01.064 --rc geninfo_all_blocks=1 00:36:01.064 --rc geninfo_unexecuted_blocks=1 00:36:01.064 00:36:01.064 ' 00:36:01.064 14:07:00 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:36:01.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:01.064 --rc genhtml_branch_coverage=1 00:36:01.064 --rc genhtml_function_coverage=1 00:36:01.064 --rc genhtml_legend=1 00:36:01.064 --rc geninfo_all_blocks=1 00:36:01.064 --rc geninfo_unexecuted_blocks=1 00:36:01.064 00:36:01.064 ' 00:36:01.064 14:07:00 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:36:01.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:01.064 --rc genhtml_branch_coverage=1 00:36:01.064 --rc genhtml_function_coverage=1 00:36:01.064 --rc genhtml_legend=1 00:36:01.064 --rc geninfo_all_blocks=1 00:36:01.064 --rc geninfo_unexecuted_blocks=1 00:36:01.064 00:36:01.064 ' 00:36:01.064 14:07:00 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:36:01.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:01.064 --rc genhtml_branch_coverage=1 00:36:01.064 --rc genhtml_function_coverage=1 00:36:01.064 --rc genhtml_legend=1 00:36:01.064 --rc geninfo_all_blocks=1 00:36:01.064 --rc geninfo_unexecuted_blocks=1 00:36:01.064 00:36:01.064 ' 00:36:01.064 14:07:00 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 
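The lt 1.15 2 call traced above expands to cmp_versions 1.15 '<' 2, which splits both version strings on '.', '-' and ':' (the IFS=.-: reads in the trace) and compares the pieces numerically, position by position, so that lcov 1.15 correctly sorts below 2. A hedged stand-alone sketch of the same technique, assuming purely numeric components; version_lt_sketch is an illustrative name, not the scripts/common.sh function:

    version_lt_sketch() {   # true if version $1 sorts strictly below $2
        local IFS=.-: v
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        for ((v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++)); do
            # missing components count as 0; 10# forces base 10 so "08" is not octal
            local a=$((10#${ver1[v]:-0})) b=$((10#${ver2[v]:-0}))
            if ((a < b)); then return 0
            elif ((a > b)); then return 1
            fi
        done
        return 1   # equal versions are not strictly less
    }
    # usage: version_lt_sketch 1.15 2 && echo "lcov is older than 2"
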
00:36:01.064 14:07:00 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:36:01.064 14:07:00 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:01.064 14:07:00 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:01.064 14:07:00 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:01.064 14:07:00 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:01.064 14:07:00 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:01.064 14:07:00 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:01.064 14:07:00 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:01.064 14:07:00 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:01.064 14:07:00 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:01.064 14:07:00 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:01.064 14:07:00 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:36:01.064 14:07:00 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:36:01.064 14:07:00 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:01.064 14:07:00 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:01.065 14:07:00 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:01.065 14:07:00 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:01.065 14:07:00 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:36:01.065 14:07:00 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:36:01.065 14:07:00 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:01.065 14:07:00 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:01.065 14:07:00 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:01.065 14:07:00 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:01.065 14:07:00 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:01.065 14:07:00 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:01.065 14:07:00 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:36:01.065 14:07:00 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:01.065 14:07:00 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:36:01.065 14:07:00 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:01.065 14:07:00 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:01.065 14:07:00 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:01.065 14:07:00 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:01.065 14:07:00 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:01.065 14:07:00 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:01.065 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:01.065 14:07:00 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:01.065 14:07:00 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:01.065 14:07:00 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:01.065 14:07:00 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' rdma == rdma ']' 00:36:01.065 14:07:00 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@15 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.' 00:36:01.065 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 00:36:01.065 14:07:00 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@16 -- # exit 0 00:36:01.065 00:36:01.065 real 0m0.214s 00:36:01.065 user 0m0.131s 00:36:01.065 sys 0m0.096s 00:36:01.065 14:07:00 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:01.065 14:07:00 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:36:01.065 ************************************ 00:36:01.065 END TEST nvmf_discovery_remove_ifc 00:36:01.065 ************************************ 00:36:01.065 14:07:00 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=rdma 00:36:01.065 14:07:00 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:36:01.065 14:07:00 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:01.065 14:07:00 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:36:01.065 ************************************ 00:36:01.065 START TEST nvmf_identify_kernel_target 00:36:01.065 ************************************ 00:36:01.065 14:07:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=rdma 00:36:01.065 * Looking for test storage... 00:36:01.065 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:36:01.065 14:07:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:36:01.065 14:07:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lcov --version 00:36:01.065 14:07:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:36:01.323 14:07:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:36:01.323 14:07:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:01.323 14:07:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:01.323 14:07:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:01.323 14:07:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:36:01.323 14:07:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:36:01.323 14:07:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:36:01.323 14:07:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:36:01.323 14:07:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:36:01.323 14:07:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:36:01.323 14:07:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:36:01.323 14:07:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- 
scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:01.323 14:07:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:36:01.323 14:07:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:36:01.323 14:07:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:01.323 14:07:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:36:01.323 14:07:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:36:01.323 14:07:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:36:01.323 14:07:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:01.323 14:07:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:36:01.323 14:07:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:36:01.323 14:07:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:36:01.323 14:07:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:36:01.323 14:07:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:01.323 14:07:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:36:01.323 14:07:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:36:01.323 14:07:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:01.323 14:07:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:01.323 14:07:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:36:01.323 14:07:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:01.323 14:07:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:36:01.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:01.323 --rc genhtml_branch_coverage=1 00:36:01.323 --rc genhtml_function_coverage=1 00:36:01.323 --rc genhtml_legend=1 00:36:01.323 --rc geninfo_all_blocks=1 00:36:01.323 --rc geninfo_unexecuted_blocks=1 00:36:01.323 00:36:01.323 ' 00:36:01.323 14:07:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:36:01.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:01.323 --rc genhtml_branch_coverage=1 00:36:01.323 --rc genhtml_function_coverage=1 00:36:01.323 --rc genhtml_legend=1 00:36:01.323 --rc geninfo_all_blocks=1 00:36:01.323 --rc geninfo_unexecuted_blocks=1 00:36:01.323 00:36:01.323 ' 00:36:01.323 14:07:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:36:01.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:01.323 --rc genhtml_branch_coverage=1 00:36:01.323 --rc genhtml_function_coverage=1 00:36:01.323 --rc genhtml_legend=1 00:36:01.323 --rc geninfo_all_blocks=1 00:36:01.323 --rc geninfo_unexecuted_blocks=1 00:36:01.323 00:36:01.323 ' 00:36:01.323 14:07:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1725 -- # LCOV='lcov 00:36:01.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:01.323 --rc genhtml_branch_coverage=1 00:36:01.323 --rc genhtml_function_coverage=1 00:36:01.323 --rc genhtml_legend=1 00:36:01.323 --rc geninfo_all_blocks=1 00:36:01.323 --rc geninfo_unexecuted_blocks=1 00:36:01.323 00:36:01.323 ' 00:36:01.323 14:07:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:36:01.323 14:07:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:36:01.323 14:07:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:01.323 14:07:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:01.323 14:07:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:01.323 14:07:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:01.323 14:07:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:01.323 14:07:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:01.323 14:07:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:01.323 14:07:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:01.323 14:07:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:01.323 14:07:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:01.323 14:07:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:36:01.323 14:07:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:36:01.323 14:07:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:01.323 14:07:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:01.323 14:07:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:01.323 14:07:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:01.323 14:07:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:36:01.323 14:07:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:36:01.323 14:07:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:01.323 14:07:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:01.323 14:07:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:01.323 14:07:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:01.323 14:07:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:01.324 14:07:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:01.324 14:07:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:36:01.324 14:07:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:01.324 14:07:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:36:01.324 14:07:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:01.324 14:07:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:01.324 14:07:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:01.324 14:07:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:01.324 14:07:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:01.324 14:07:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:01.324 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:01.324 14:07:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:01.324 14:07:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:01.324 14:07:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:01.324 14:07:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:36:01.324 14:07:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:36:01.324 14:07:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:01.324 14:07:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:01.324 14:07:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:01.324 14:07:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:01.324 14:07:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:01.324 14:07:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:01.324 14:07:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:01.324 14:07:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:01.324 14:07:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:01.324 14:07:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:36:01.324 14:07:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:36:07.884 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:07.884 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:36:07.884 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:07.884 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:07.884 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:07.884 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:07.884 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:07.884 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:36:07.884 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:07.884 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:36:07.884 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:36:07.884 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:36:07.884 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # 
local -ga x722 00:36:07.884 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:36:07.884 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:36:07.884 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:07.884 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:07.884 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:07.884 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:07.884 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:07.884 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:07.884 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:07.884 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:07.884 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:07.884 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:07.884 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:07.884 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:07.884 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:07.884 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:36:07.884 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:36:07.884 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:36:07.884 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:36:07.884 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:36:07.884 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:07.884 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:07.884 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:36:07.884 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:36:07.884 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:36:07.884 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:36:07.884 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:36:07.884 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x1015 == 
\0\x\1\0\1\9 ]] 00:36:07.884 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:36:07.884 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:36:07.884 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:07.884 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:36:07.884 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:36:07.884 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:36:07.884 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:36:07.884 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:36:07.884 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:36:07.884 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:36:07.884 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:36:07.884 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:07.884 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:36:07.884 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:07.884 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:07.884 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:36:07.884 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:07.884 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:07.884 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:36:07.884 Found net devices under 0000:18:00.0: mlx_0_0 00:36:07.884 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:07.884 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:07.884 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:07.884 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:36:07.884 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:07.884 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:07.885 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:36:07.885 Found net devices under 0000:18:00.1: mlx_0_1 00:36:07.885 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:07.885 14:07:06 
nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:07.885 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:36:07.885 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:07.885 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:36:07.885 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:36:07.885 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # rdma_device_init 00:36:07.885 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:36:07.885 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@62 -- # uname 00:36:07.885 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:36:07.885 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@66 -- # modprobe ib_cm 00:36:07.885 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@67 -- # modprobe ib_core 00:36:07.885 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@68 -- # modprobe ib_umad 00:36:07.885 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:36:07.885 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@70 -- # modprobe iw_cm 00:36:07.885 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:36:07.885 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:36:07.885 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@530 -- # allocate_nic_ips 00:36:07.885 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:36:07.885 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@77 -- # get_rdma_if_list 00:36:07.885 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:36:07.885 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:36:07.885 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:36:07.885 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:36:07.885 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:36:07.885 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:36:07.885 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:36:07.885 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:36:07.885 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@108 -- # echo mlx_0_0 00:36:07.885 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@109 -- # continue 2 00:36:07.885 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:36:07.885 
14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:36:07.885 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:36:07.885 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:36:07.885 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:36:07.885 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@108 -- # echo mlx_0_1 00:36:07.885 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@109 -- # continue 2 00:36:07.885 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:36:07.885 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:36:07.885 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:36:07.885 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:36:07.885 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:36:07.885 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:36:07.885 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:36:07.885 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:36:07.885 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:36:07.885 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:36:07.885 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:36:07.885 altname enp24s0f0np0 00:36:07.885 altname ens785f0np0 00:36:07.885 inet 192.168.100.8/24 scope global mlx_0_0 00:36:07.885 valid_lft forever preferred_lft forever 00:36:07.885 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:36:07.885 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:36:07.885 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:36:07.885 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:36:07.885 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:36:07.885 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:36:07.885 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:36:07.885 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:36:07.885 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:36:07.885 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:36:07.885 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:36:07.885 altname enp24s0f1np1 00:36:07.885 altname ens785f1np1 00:36:07.885 inet 192.168.100.9/24 scope global mlx_0_1 00:36:07.885 valid_lft forever preferred_lft forever 00:36:07.885 14:07:06 
nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:36:07.885 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:07.885 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:36:07.885 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:36:07.885 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:36:07.885 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@90 -- # get_rdma_if_list 00:36:07.885 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:36:07.885 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:36:07.885 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:36:07.885 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:36:07.885 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:36:07.885 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:36:07.885 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:36:07.885 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:36:07.885 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@108 -- # echo mlx_0_0 00:36:07.885 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@109 -- # continue 2 00:36:07.885 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:36:07.885 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:36:07.885 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:36:07.885 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:36:07.885 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:36:07.885 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@108 -- # echo mlx_0_1 00:36:07.885 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@109 -- # continue 2 00:36:07.885 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:36:07.885 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:36:07.885 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:36:07.885 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:36:07.885 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:36:07.885 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:36:07.885 
14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:36:07.885 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:36:07.885 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:36:07.885 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:36:07.885 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:36:07.885 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:36:07.885 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:36:07.885 192.168.100.9' 00:36:07.885 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:36:07.885 192.168.100.9' 00:36:07.885 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@485 -- # head -n 1 00:36:07.885 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:36:07.885 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:36:07.885 192.168.100.9' 00:36:07.885 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@486 -- # tail -n +2 00:36:07.886 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@486 -- # head -n 1 00:36:07.886 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:36:07.886 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:36:07.886 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:36:07.886 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:36:07.886 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:36:07.886 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:36:07.886 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:36:07.886 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:36:07.886 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:36:07.886 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:07.886 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:07.886 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:07.886 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:07.886 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:07.886 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:07.886 14:07:06 
nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:07.886 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:07.886 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:07.886 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=192.168.100.8 00:36:07.886 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 192.168.100.8 00:36:07.886 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=192.168.100.8 00:36:07.886 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:36:07.886 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:07.886 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:07.886 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:36:07.886 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:36:07.886 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:36:07.886 14:07:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:36:07.886 14:07:07 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:36:07.886 14:07:07 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:36:10.424 Waiting for block devices as requested 00:36:10.424 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:36:10.424 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:36:10.424 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:36:10.424 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:36:10.424 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:36:10.424 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:36:10.683 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:36:10.683 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:36:10.683 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:36:10.683 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:36:10.942 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:36:10.942 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:36:10.942 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:36:11.201 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:36:11.201 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:36:11.201 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:36:11.201 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:36:13.106 14:07:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:36:13.106 14:07:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:36:13.106 14:07:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:36:13.106 14:07:12 
nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:36:13.106 14:07:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:36:13.106 14:07:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:36:13.106 14:07:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:36:13.106 14:07:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:36:13.106 14:07:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:36:13.106 No valid GPT data, bailing 00:36:13.106 14:07:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:36:13.106 14:07:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:36:13.106 14:07:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:36:13.106 14:07:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:36:13.106 14:07:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:36:13.106 14:07:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:13.106 14:07:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:13.106 14:07:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:36:13.106 14:07:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:36:13.106 14:07:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:36:13.106 14:07:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:36:13.106 14:07:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:36:13.106 14:07:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 192.168.100.8 00:36:13.106 14:07:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo rdma 00:36:13.106 14:07:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:36:13.106 14:07:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:36:13.106 14:07:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:36:13.106 14:07:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -a 192.168.100.8 -t rdma -s 4420 00:36:13.106 00:36:13.106 Discovery Log Number of Records 2, Generation counter 2 00:36:13.106 =====Discovery Log Entry 0====== 00:36:13.106 trtype: rdma 00:36:13.106 adrfam: ipv4 00:36:13.106 subtype: current discovery subsystem 00:36:13.106 treq: not specified, sq 
flow control disable supported 00:36:13.106 portid: 1 00:36:13.106 trsvcid: 4420 00:36:13.106 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:36:13.106 traddr: 192.168.100.8 00:36:13.106 eflags: none 00:36:13.106 rdma_prtype: not specified 00:36:13.106 rdma_qptype: connected 00:36:13.106 rdma_cms: rdma-cm 00:36:13.106 rdma_pkey: 0x0000 00:36:13.106 =====Discovery Log Entry 1====== 00:36:13.106 trtype: rdma 00:36:13.106 adrfam: ipv4 00:36:13.106 subtype: nvme subsystem 00:36:13.106 treq: not specified, sq flow control disable supported 00:36:13.106 portid: 1 00:36:13.106 trsvcid: 4420 00:36:13.106 subnqn: nqn.2016-06.io.spdk:testnqn 00:36:13.106 traddr: 192.168.100.8 00:36:13.106 eflags: none 00:36:13.106 rdma_prtype: not specified 00:36:13.106 rdma_qptype: connected 00:36:13.106 rdma_cms: rdma-cm 00:36:13.106 rdma_pkey: 0x0000 00:36:13.106 14:07:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 00:36:13.106 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:36:13.106 ===================================================== 00:36:13.106 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2014-08.org.nvmexpress.discovery 00:36:13.106 ===================================================== 00:36:13.106 Controller Capabilities/Features 00:36:13.106 ================================ 00:36:13.106 Vendor ID: 0000 00:36:13.106 Subsystem Vendor ID: 0000 00:36:13.106 Serial Number: 13c1277862dd962ee92c 00:36:13.106 Model Number: Linux 00:36:13.106 Firmware Version: 6.8.9-20 00:36:13.106 Recommended Arb Burst: 0 00:36:13.106 IEEE OUI Identifier: 00 00 00 00:36:13.106 Multi-path I/O 00:36:13.106 May have multiple subsystem ports: No 00:36:13.106 May have multiple controllers: No 00:36:13.106 Associated with SR-IOV VF: No 00:36:13.106 Max Data Transfer Size: Unlimited 00:36:13.106 Max Number of Namespaces: 0 00:36:13.106 Max Number of I/O Queues: 1024 00:36:13.106 NVMe Specification Version (VS): 1.3 00:36:13.106 NVMe Specification Version (Identify): 1.3 00:36:13.106 Maximum Queue Entries: 128 00:36:13.106 Contiguous Queues Required: No 00:36:13.106 Arbitration Mechanisms Supported 00:36:13.106 Weighted Round Robin: Not Supported 00:36:13.106 Vendor Specific: Not Supported 00:36:13.106 Reset Timeout: 7500 ms 00:36:13.106 Doorbell Stride: 4 bytes 00:36:13.106 NVM Subsystem Reset: Not Supported 00:36:13.106 Command Sets Supported 00:36:13.106 NVM Command Set: Supported 00:36:13.106 Boot Partition: Not Supported 00:36:13.106 Memory Page Size Minimum: 4096 bytes 00:36:13.106 Memory Page Size Maximum: 4096 bytes 00:36:13.106 Persistent Memory Region: Not Supported 00:36:13.106 Optional Asynchronous Events Supported 00:36:13.106 Namespace Attribute Notices: Not Supported 00:36:13.106 Firmware Activation Notices: Not Supported 00:36:13.106 ANA Change Notices: Not Supported 00:36:13.106 PLE Aggregate Log Change Notices: Not Supported 00:36:13.106 LBA Status Info Alert Notices: Not Supported 00:36:13.106 EGE Aggregate Log Change Notices: Not Supported 00:36:13.106 Normal NVM Subsystem Shutdown event: Not Supported 00:36:13.106 Zone Descriptor Change Notices: Not Supported 00:36:13.106 Discovery Log Change Notices: Supported 00:36:13.106 Controller Attributes 00:36:13.106 128-bit Host Identifier: Not Supported 00:36:13.106 Non-Operational Permissive Mode: Not Supported 00:36:13.106 NVM Sets: Not Supported 00:36:13.106 Read Recovery Levels: 
Not Supported 00:36:13.106 Endurance Groups: Not Supported 00:36:13.106 Predictable Latency Mode: Not Supported 00:36:13.106 Traffic Based Keep ALive: Not Supported 00:36:13.106 Namespace Granularity: Not Supported 00:36:13.106 SQ Associations: Not Supported 00:36:13.106 UUID List: Not Supported 00:36:13.106 Multi-Domain Subsystem: Not Supported 00:36:13.106 Fixed Capacity Management: Not Supported 00:36:13.106 Variable Capacity Management: Not Supported 00:36:13.106 Delete Endurance Group: Not Supported 00:36:13.106 Delete NVM Set: Not Supported 00:36:13.106 Extended LBA Formats Supported: Not Supported 00:36:13.106 Flexible Data Placement Supported: Not Supported 00:36:13.106 00:36:13.106 Controller Memory Buffer Support 00:36:13.107 ================================ 00:36:13.107 Supported: No 00:36:13.107 00:36:13.107 Persistent Memory Region Support 00:36:13.107 ================================ 00:36:13.107 Supported: No 00:36:13.107 00:36:13.107 Admin Command Set Attributes 00:36:13.107 ============================ 00:36:13.107 Security Send/Receive: Not Supported 00:36:13.107 Format NVM: Not Supported 00:36:13.107 Firmware Activate/Download: Not Supported 00:36:13.107 Namespace Management: Not Supported 00:36:13.107 Device Self-Test: Not Supported 00:36:13.107 Directives: Not Supported 00:36:13.107 NVMe-MI: Not Supported 00:36:13.107 Virtualization Management: Not Supported 00:36:13.107 Doorbell Buffer Config: Not Supported 00:36:13.107 Get LBA Status Capability: Not Supported 00:36:13.107 Command & Feature Lockdown Capability: Not Supported 00:36:13.107 Abort Command Limit: 1 00:36:13.107 Async Event Request Limit: 1 00:36:13.107 Number of Firmware Slots: N/A 00:36:13.107 Firmware Slot 1 Read-Only: N/A 00:36:13.107 Firmware Activation Without Reset: N/A 00:36:13.107 Multiple Update Detection Support: N/A 00:36:13.107 Firmware Update Granularity: No Information Provided 00:36:13.107 Per-Namespace SMART Log: No 00:36:13.107 Asymmetric Namespace Access Log Page: Not Supported 00:36:13.107 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:36:13.107 Command Effects Log Page: Not Supported 00:36:13.107 Get Log Page Extended Data: Supported 00:36:13.107 Telemetry Log Pages: Not Supported 00:36:13.107 Persistent Event Log Pages: Not Supported 00:36:13.107 Supported Log Pages Log Page: May Support 00:36:13.107 Commands Supported & Effects Log Page: Not Supported 00:36:13.107 Feature Identifiers & Effects Log Page:May Support 00:36:13.107 NVMe-MI Commands & Effects Log Page: May Support 00:36:13.107 Data Area 4 for Telemetry Log: Not Supported 00:36:13.107 Error Log Page Entries Supported: 1 00:36:13.107 Keep Alive: Not Supported 00:36:13.107 00:36:13.107 NVM Command Set Attributes 00:36:13.107 ========================== 00:36:13.107 Submission Queue Entry Size 00:36:13.107 Max: 1 00:36:13.107 Min: 1 00:36:13.107 Completion Queue Entry Size 00:36:13.107 Max: 1 00:36:13.107 Min: 1 00:36:13.107 Number of Namespaces: 0 00:36:13.107 Compare Command: Not Supported 00:36:13.107 Write Uncorrectable Command: Not Supported 00:36:13.107 Dataset Management Command: Not Supported 00:36:13.107 Write Zeroes Command: Not Supported 00:36:13.107 Set Features Save Field: Not Supported 00:36:13.107 Reservations: Not Supported 00:36:13.107 Timestamp: Not Supported 00:36:13.107 Copy: Not Supported 00:36:13.107 Volatile Write Cache: Not Present 00:36:13.107 Atomic Write Unit (Normal): 1 00:36:13.107 Atomic Write Unit (PFail): 1 00:36:13.107 Atomic Compare & Write Unit: 1 00:36:13.107 Fused Compare & Write: Not 
Supported 00:36:13.107 Scatter-Gather List 00:36:13.107 SGL Command Set: Supported 00:36:13.107 SGL Keyed: Supported 00:36:13.107 SGL Bit Bucket Descriptor: Not Supported 00:36:13.107 SGL Metadata Pointer: Not Supported 00:36:13.107 Oversized SGL: Not Supported 00:36:13.107 SGL Metadata Address: Not Supported 00:36:13.107 SGL Offset: Supported 00:36:13.107 Transport SGL Data Block: Not Supported 00:36:13.107 Replay Protected Memory Block: Not Supported 00:36:13.107 00:36:13.107 Firmware Slot Information 00:36:13.107 ========================= 00:36:13.107 Active slot: 0 00:36:13.107 00:36:13.107 00:36:13.107 Error Log 00:36:13.107 ========= 00:36:13.107 00:36:13.107 Active Namespaces 00:36:13.107 ================= 00:36:13.107 Discovery Log Page 00:36:13.107 ================== 00:36:13.107 Generation Counter: 2 00:36:13.107 Number of Records: 2 00:36:13.107 Record Format: 0 00:36:13.107 00:36:13.107 Discovery Log Entry 0 00:36:13.107 ---------------------- 00:36:13.107 Transport Type: 1 (RDMA) 00:36:13.107 Address Family: 1 (IPv4) 00:36:13.107 Subsystem Type: 3 (Current Discovery Subsystem) 00:36:13.107 Entry Flags: 00:36:13.107 Duplicate Returned Information: 0 00:36:13.107 Explicit Persistent Connection Support for Discovery: 0 00:36:13.107 Transport Requirements: 00:36:13.107 Secure Channel: Not Specified 00:36:13.107 Port ID: 1 (0x0001) 00:36:13.107 Controller ID: 65535 (0xffff) 00:36:13.107 Admin Max SQ Size: 32 00:36:13.107 Transport Service Identifier: 4420 00:36:13.107 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:36:13.107 Transport Address: 192.168.100.8 00:36:13.107 Transport Specific Address Subtype - RDMA 00:36:13.107 RDMA QP Service Type: 1 (Reliable Connected) 00:36:13.107 RDMA Provider Type: 1 (No provider specified) 00:36:13.107 RDMA CM Service: 1 (RDMA_CM) 00:36:13.107 Discovery Log Entry 1 00:36:13.107 ---------------------- 00:36:13.107 Transport Type: 1 (RDMA) 00:36:13.107 Address Family: 1 (IPv4) 00:36:13.107 Subsystem Type: 2 (NVM Subsystem) 00:36:13.107 Entry Flags: 00:36:13.107 Duplicate Returned Information: 0 00:36:13.107 Explicit Persistent Connection Support for Discovery: 0 00:36:13.107 Transport Requirements: 00:36:13.107 Secure Channel: Not Specified 00:36:13.107 Port ID: 1 (0x0001) 00:36:13.107 Controller ID: 65535 (0xffff) 00:36:13.107 Admin Max SQ Size: 32 00:36:13.107 Transport Service Identifier: 4420 00:36:13.107 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:36:13.107 Transport Address: 192.168.100.8 00:36:13.107 Transport Specific Address Subtype - RDMA 00:36:13.107 RDMA QP Service Type: 1 (Reliable Connected) 00:36:13.107 RDMA Provider Type: 1 (No provider specified) 00:36:13.107 RDMA CM Service: 1 (RDMA_CM) 00:36:13.107 14:07:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:13.367 get_feature(0x01) failed 00:36:13.367 get_feature(0x02) failed 00:36:13.367 get_feature(0x04) failed 00:36:13.367 ===================================================== 00:36:13.367 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:testnqn 00:36:13.367 ===================================================== 00:36:13.367 Controller Capabilities/Features 00:36:13.367 ================================ 00:36:13.367 Vendor ID: 0000 00:36:13.367 Subsystem Vendor ID: 0000 00:36:13.367 Serial Number: 
13e2b61cf22d7f4136d3 00:36:13.367 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:36:13.367 Firmware Version: 6.8.9-20 00:36:13.367 Recommended Arb Burst: 6 00:36:13.367 IEEE OUI Identifier: 00 00 00 00:36:13.367 Multi-path I/O 00:36:13.367 May have multiple subsystem ports: Yes 00:36:13.367 May have multiple controllers: Yes 00:36:13.367 Associated with SR-IOV VF: No 00:36:13.367 Max Data Transfer Size: 1048576 00:36:13.367 Max Number of Namespaces: 1024 00:36:13.367 Max Number of I/O Queues: 128 00:36:13.367 NVMe Specification Version (VS): 1.3 00:36:13.367 NVMe Specification Version (Identify): 1.3 00:36:13.367 Maximum Queue Entries: 128 00:36:13.367 Contiguous Queues Required: No 00:36:13.367 Arbitration Mechanisms Supported 00:36:13.367 Weighted Round Robin: Not Supported 00:36:13.367 Vendor Specific: Not Supported 00:36:13.367 Reset Timeout: 7500 ms 00:36:13.367 Doorbell Stride: 4 bytes 00:36:13.367 NVM Subsystem Reset: Not Supported 00:36:13.367 Command Sets Supported 00:36:13.367 NVM Command Set: Supported 00:36:13.367 Boot Partition: Not Supported 00:36:13.367 Memory Page Size Minimum: 4096 bytes 00:36:13.367 Memory Page Size Maximum: 4096 bytes 00:36:13.367 Persistent Memory Region: Not Supported 00:36:13.367 Optional Asynchronous Events Supported 00:36:13.367 Namespace Attribute Notices: Supported 00:36:13.367 Firmware Activation Notices: Not Supported 00:36:13.367 ANA Change Notices: Supported 00:36:13.367 PLE Aggregate Log Change Notices: Not Supported 00:36:13.367 LBA Status Info Alert Notices: Not Supported 00:36:13.367 EGE Aggregate Log Change Notices: Not Supported 00:36:13.367 Normal NVM Subsystem Shutdown event: Not Supported 00:36:13.367 Zone Descriptor Change Notices: Not Supported 00:36:13.367 Discovery Log Change Notices: Not Supported 00:36:13.367 Controller Attributes 00:36:13.367 128-bit Host Identifier: Supported 00:36:13.367 Non-Operational Permissive Mode: Not Supported 00:36:13.367 NVM Sets: Not Supported 00:36:13.367 Read Recovery Levels: Not Supported 00:36:13.367 Endurance Groups: Not Supported 00:36:13.367 Predictable Latency Mode: Not Supported 00:36:13.367 Traffic Based Keep ALive: Supported 00:36:13.367 Namespace Granularity: Not Supported 00:36:13.367 SQ Associations: Not Supported 00:36:13.367 UUID List: Not Supported 00:36:13.367 Multi-Domain Subsystem: Not Supported 00:36:13.367 Fixed Capacity Management: Not Supported 00:36:13.367 Variable Capacity Management: Not Supported 00:36:13.367 Delete Endurance Group: Not Supported 00:36:13.367 Delete NVM Set: Not Supported 00:36:13.367 Extended LBA Formats Supported: Not Supported 00:36:13.367 Flexible Data Placement Supported: Not Supported 00:36:13.367 00:36:13.367 Controller Memory Buffer Support 00:36:13.367 ================================ 00:36:13.367 Supported: No 00:36:13.367 00:36:13.367 Persistent Memory Region Support 00:36:13.367 ================================ 00:36:13.367 Supported: No 00:36:13.367 00:36:13.367 Admin Command Set Attributes 00:36:13.367 ============================ 00:36:13.367 Security Send/Receive: Not Supported 00:36:13.367 Format NVM: Not Supported 00:36:13.367 Firmware Activate/Download: Not Supported 00:36:13.367 Namespace Management: Not Supported 00:36:13.367 Device Self-Test: Not Supported 00:36:13.367 Directives: Not Supported 00:36:13.367 NVMe-MI: Not Supported 00:36:13.367 Virtualization Management: Not Supported 00:36:13.367 Doorbell Buffer Config: Not Supported 00:36:13.367 Get LBA Status Capability: Not Supported 00:36:13.367 Command & Feature Lockdown 
Capability: Not Supported 00:36:13.367 Abort Command Limit: 4 00:36:13.367 Async Event Request Limit: 4 00:36:13.367 Number of Firmware Slots: N/A 00:36:13.367 Firmware Slot 1 Read-Only: N/A 00:36:13.367 Firmware Activation Without Reset: N/A 00:36:13.367 Multiple Update Detection Support: N/A 00:36:13.367 Firmware Update Granularity: No Information Provided 00:36:13.367 Per-Namespace SMART Log: Yes 00:36:13.367 Asymmetric Namespace Access Log Page: Supported 00:36:13.367 ANA Transition Time : 10 sec 00:36:13.367 00:36:13.367 Asymmetric Namespace Access Capabilities 00:36:13.367 ANA Optimized State : Supported 00:36:13.367 ANA Non-Optimized State : Supported 00:36:13.367 ANA Inaccessible State : Supported 00:36:13.367 ANA Persistent Loss State : Supported 00:36:13.367 ANA Change State : Supported 00:36:13.367 ANAGRPID is not changed : No 00:36:13.367 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:36:13.367 00:36:13.367 ANA Group Identifier Maximum : 128 00:36:13.367 Number of ANA Group Identifiers : 128 00:36:13.367 Max Number of Allowed Namespaces : 1024 00:36:13.367 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:36:13.367 Command Effects Log Page: Supported 00:36:13.367 Get Log Page Extended Data: Supported 00:36:13.367 Telemetry Log Pages: Not Supported 00:36:13.367 Persistent Event Log Pages: Not Supported 00:36:13.367 Supported Log Pages Log Page: May Support 00:36:13.368 Commands Supported & Effects Log Page: Not Supported 00:36:13.368 Feature Identifiers & Effects Log Page:May Support 00:36:13.368 NVMe-MI Commands & Effects Log Page: May Support 00:36:13.368 Data Area 4 for Telemetry Log: Not Supported 00:36:13.368 Error Log Page Entries Supported: 128 00:36:13.368 Keep Alive: Supported 00:36:13.368 Keep Alive Granularity: 1000 ms 00:36:13.368 00:36:13.368 NVM Command Set Attributes 00:36:13.368 ========================== 00:36:13.368 Submission Queue Entry Size 00:36:13.368 Max: 64 00:36:13.368 Min: 64 00:36:13.368 Completion Queue Entry Size 00:36:13.368 Max: 16 00:36:13.368 Min: 16 00:36:13.368 Number of Namespaces: 1024 00:36:13.368 Compare Command: Not Supported 00:36:13.368 Write Uncorrectable Command: Not Supported 00:36:13.368 Dataset Management Command: Supported 00:36:13.368 Write Zeroes Command: Supported 00:36:13.368 Set Features Save Field: Not Supported 00:36:13.368 Reservations: Not Supported 00:36:13.368 Timestamp: Not Supported 00:36:13.368 Copy: Not Supported 00:36:13.368 Volatile Write Cache: Present 00:36:13.368 Atomic Write Unit (Normal): 1 00:36:13.368 Atomic Write Unit (PFail): 1 00:36:13.368 Atomic Compare & Write Unit: 1 00:36:13.368 Fused Compare & Write: Not Supported 00:36:13.368 Scatter-Gather List 00:36:13.368 SGL Command Set: Supported 00:36:13.368 SGL Keyed: Supported 00:36:13.368 SGL Bit Bucket Descriptor: Not Supported 00:36:13.368 SGL Metadata Pointer: Not Supported 00:36:13.368 Oversized SGL: Not Supported 00:36:13.368 SGL Metadata Address: Not Supported 00:36:13.368 SGL Offset: Supported 00:36:13.368 Transport SGL Data Block: Not Supported 00:36:13.368 Replay Protected Memory Block: Not Supported 00:36:13.368 00:36:13.368 Firmware Slot Information 00:36:13.368 ========================= 00:36:13.368 Active slot: 0 00:36:13.368 00:36:13.368 Asymmetric Namespace Access 00:36:13.368 =========================== 00:36:13.368 Change Count : 0 00:36:13.368 Number of ANA Group Descriptors : 1 00:36:13.368 ANA Group Descriptor : 0 00:36:13.368 ANA Group ID : 1 00:36:13.368 Number of NSID Values : 1 00:36:13.368 Change Count : 0 00:36:13.368 ANA State 
: 1 00:36:13.368 Namespace Identifier : 1 00:36:13.368 00:36:13.368 Commands Supported and Effects 00:36:13.368 ============================== 00:36:13.368 Admin Commands 00:36:13.368 -------------- 00:36:13.368 Get Log Page (02h): Supported 00:36:13.368 Identify (06h): Supported 00:36:13.368 Abort (08h): Supported 00:36:13.368 Set Features (09h): Supported 00:36:13.368 Get Features (0Ah): Supported 00:36:13.368 Asynchronous Event Request (0Ch): Supported 00:36:13.368 Keep Alive (18h): Supported 00:36:13.368 I/O Commands 00:36:13.368 ------------ 00:36:13.368 Flush (00h): Supported 00:36:13.368 Write (01h): Supported LBA-Change 00:36:13.368 Read (02h): Supported 00:36:13.368 Write Zeroes (08h): Supported LBA-Change 00:36:13.368 Dataset Management (09h): Supported 00:36:13.368 00:36:13.368 Error Log 00:36:13.368 ========= 00:36:13.368 Entry: 0 00:36:13.368 Error Count: 0x3 00:36:13.368 Submission Queue Id: 0x0 00:36:13.368 Command Id: 0x5 00:36:13.368 Phase Bit: 0 00:36:13.368 Status Code: 0x2 00:36:13.368 Status Code Type: 0x0 00:36:13.368 Do Not Retry: 1 00:36:13.368 Error Location: 0x28 00:36:13.368 LBA: 0x0 00:36:13.368 Namespace: 0x0 00:36:13.368 Vendor Log Page: 0x0 00:36:13.368 ----------- 00:36:13.368 Entry: 1 00:36:13.368 Error Count: 0x2 00:36:13.368 Submission Queue Id: 0x0 00:36:13.368 Command Id: 0x5 00:36:13.368 Phase Bit: 0 00:36:13.368 Status Code: 0x2 00:36:13.368 Status Code Type: 0x0 00:36:13.368 Do Not Retry: 1 00:36:13.368 Error Location: 0x28 00:36:13.368 LBA: 0x0 00:36:13.368 Namespace: 0x0 00:36:13.368 Vendor Log Page: 0x0 00:36:13.368 ----------- 00:36:13.368 Entry: 2 00:36:13.368 Error Count: 0x1 00:36:13.368 Submission Queue Id: 0x0 00:36:13.368 Command Id: 0x0 00:36:13.368 Phase Bit: 0 00:36:13.368 Status Code: 0x2 00:36:13.368 Status Code Type: 0x0 00:36:13.368 Do Not Retry: 1 00:36:13.368 Error Location: 0x28 00:36:13.368 LBA: 0x0 00:36:13.368 Namespace: 0x0 00:36:13.368 Vendor Log Page: 0x0 00:36:13.368 00:36:13.368 Number of Queues 00:36:13.368 ================ 00:36:13.368 Number of I/O Submission Queues: 128 00:36:13.368 Number of I/O Completion Queues: 128 00:36:13.368 00:36:13.368 ZNS Specific Controller Data 00:36:13.368 ============================ 00:36:13.368 Zone Append Size Limit: 0 00:36:13.368 00:36:13.368 00:36:13.368 Active Namespaces 00:36:13.368 ================= 00:36:13.368 get_feature(0x05) failed 00:36:13.368 Namespace ID:1 00:36:13.368 Command Set Identifier: NVM (00h) 00:36:13.368 Deallocate: Supported 00:36:13.368 Deallocated/Unwritten Error: Not Supported 00:36:13.368 Deallocated Read Value: Unknown 00:36:13.368 Deallocate in Write Zeroes: Not Supported 00:36:13.368 Deallocated Guard Field: 0xFFFF 00:36:13.368 Flush: Supported 00:36:13.368 Reservation: Not Supported 00:36:13.368 Namespace Sharing Capabilities: Multiple Controllers 00:36:13.368 Size (in LBAs): 7814037168 (3726GiB) 00:36:13.368 Capacity (in LBAs): 7814037168 (3726GiB) 00:36:13.368 Utilization (in LBAs): 7814037168 (3726GiB) 00:36:13.368 UUID: 3f88e34d-c7fd-413c-b21b-1237d0a784aa 00:36:13.368 Thin Provisioning: Not Supported 00:36:13.368 Per-NS Atomic Units: Yes 00:36:13.368 Atomic Boundary Size (Normal): 0 00:36:13.368 Atomic Boundary Size (PFail): 0 00:36:13.368 Atomic Boundary Offset: 0 00:36:13.368 NGUID/EUI64 Never Reused: No 00:36:13.368 ANA group ID: 1 00:36:13.368 Namespace Write Protected: No 00:36:13.368 Number of LBA Formats: 1 00:36:13.368 Current LBA Format: LBA Format #00 00:36:13.368 LBA Format #00: Data Size: 512 Metadata Size: 0 00:36:13.368 00:36:13.368 
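[Annotation] The two spdk_nvme_identify invocations above first query the discovery controller and then the kernel-hosted subsystem nqn.2016-06.io.spdk:testnqn. The get_feature(0x01/0x02/0x04/0x05) failures interleaved with the second run are expected: Arbitration, Power Management, Temperature Threshold, and Error Recovery are optional features that this kernel nvmet target evidently does not implement, so the tool notes the failure and keeps printing. A minimal sketch of re-running this step by hand, using only the binary path, address, and NQNs recorded in this log:

  IDENTIFY=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify
  # Discovery controller: returns the two Discovery Log entries shown above.
  $IDENTIFY -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery'
  # NVM subsystem: returns the controller/namespace data shown above.
  $IDENTIFY -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'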
14:07:13 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:36:13.368 14:07:13 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:13.368 14:07:13 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:36:13.368 14:07:13 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:36:13.368 14:07:13 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:36:13.368 14:07:13 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:36:13.368 14:07:13 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:13.368 14:07:13 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:36:13.368 rmmod nvme_rdma 00:36:13.368 rmmod nvme_fabrics 00:36:13.368 14:07:13 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:13.368 14:07:13 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:36:13.368 14:07:13 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:36:13.368 14:07:13 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:36:13.368 14:07:13 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:13.368 14:07:13 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:36:13.368 14:07:13 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:36:13.368 14:07:13 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:36:13.368 14:07:13 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:36:13.368 14:07:13 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:13.368 14:07:13 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:13.368 14:07:13 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:36:13.368 14:07:13 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:13.368 14:07:13 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:36:13.368 14:07:13 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_rdma nvmet 00:36:13.369 14:07:13 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:36:16.658 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:36:16.658 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:36:16.658 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:36:16.658 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:36:16.658 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:36:16.658 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:36:16.658 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:36:16.658 
0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:36:16.658 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:36:16.658 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:36:16.658 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:36:16.658 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:36:16.658 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:36:16.658 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:36:16.658 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:36:16.658 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:36:19.950 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:36:21.328 00:36:21.328 real 0m20.124s 00:36:21.328 user 0m5.431s 00:36:21.328 sys 0m10.483s 00:36:21.328 14:07:20 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:21.328 14:07:20 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:36:21.328 ************************************ 00:36:21.328 END TEST nvmf_identify_kernel_target 00:36:21.328 ************************************ 00:36:21.328 14:07:20 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=rdma 00:36:21.328 14:07:20 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:36:21.328 14:07:20 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:21.328 14:07:20 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:36:21.328 ************************************ 00:36:21.328 START TEST nvmf_auth_host 00:36:21.328 ************************************ 00:36:21.328 14:07:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=rdma 00:36:21.328 * Looking for test storage... 
00:36:21.328 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:36:21.328 14:07:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:36:21.328 14:07:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lcov --version 00:36:21.328 14:07:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:36:21.328 14:07:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:36:21.328 14:07:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:21.328 14:07:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:21.328 14:07:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:21.328 14:07:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:36:21.328 14:07:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:36:21.328 14:07:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:36:21.328 14:07:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:36:21.328 14:07:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:36:21.328 14:07:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:36:21.328 14:07:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:36:21.328 14:07:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:21.328 14:07:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:36:21.328 14:07:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:36:21.328 14:07:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:21.328 14:07:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:21.328 14:07:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:36:21.328 14:07:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:36:21.328 14:07:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:21.328 14:07:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:36:21.328 14:07:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:36:21.588 14:07:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:36:21.588 14:07:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:36:21.588 14:07:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:21.588 14:07:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:36:21.588 14:07:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:36:21.588 14:07:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:21.588 14:07:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:21.588 14:07:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:36:21.588 14:07:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:21.588 14:07:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:36:21.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:21.588 --rc genhtml_branch_coverage=1 00:36:21.588 --rc genhtml_function_coverage=1 00:36:21.588 --rc genhtml_legend=1 00:36:21.588 --rc geninfo_all_blocks=1 00:36:21.588 --rc geninfo_unexecuted_blocks=1 00:36:21.588 00:36:21.588 ' 00:36:21.588 14:07:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:36:21.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:21.588 --rc genhtml_branch_coverage=1 00:36:21.588 --rc genhtml_function_coverage=1 00:36:21.588 --rc genhtml_legend=1 00:36:21.588 --rc geninfo_all_blocks=1 00:36:21.589 --rc geninfo_unexecuted_blocks=1 00:36:21.589 00:36:21.589 ' 00:36:21.589 14:07:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:36:21.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:21.589 --rc genhtml_branch_coverage=1 00:36:21.589 --rc genhtml_function_coverage=1 00:36:21.589 --rc genhtml_legend=1 00:36:21.589 --rc geninfo_all_blocks=1 00:36:21.589 --rc geninfo_unexecuted_blocks=1 00:36:21.589 00:36:21.589 ' 00:36:21.589 14:07:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:36:21.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:21.589 --rc genhtml_branch_coverage=1 00:36:21.589 --rc genhtml_function_coverage=1 00:36:21.589 --rc genhtml_legend=1 00:36:21.589 --rc geninfo_all_blocks=1 00:36:21.589 --rc geninfo_unexecuted_blocks=1 00:36:21.589 00:36:21.589 ' 00:36:21.589 14:07:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:36:21.589 14:07:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:36:21.589 14:07:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:21.589 14:07:21 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:21.589 14:07:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:21.589 14:07:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:21.589 14:07:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:21.589 14:07:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:21.589 14:07:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:21.589 14:07:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:21.589 14:07:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:21.589 14:07:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:21.589 14:07:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:36:21.589 14:07:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:36:21.589 14:07:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:21.589 14:07:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:21.589 14:07:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:21.589 14:07:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:21.589 14:07:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:36:21.589 14:07:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:36:21.589 14:07:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:21.589 14:07:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:21.589 14:07:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:21.589 14:07:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:21.589 14:07:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:21.589 14:07:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:21.589 14:07:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:36:21.589 14:07:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:21.589 14:07:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:36:21.589 14:07:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:21.589 14:07:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:21.589 14:07:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:21.589 14:07:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:21.589 14:07:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:21.589 14:07:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:21.589 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:21.589 14:07:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:21.589 14:07:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:21.589 14:07:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:21.589 14:07:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:36:21.589 14:07:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:36:21.589 14:07:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:36:21.589 14:07:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:36:21.589 14:07:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:36:21.589 14:07:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:36:21.589 14:07:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:36:21.589 14:07:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:36:21.589 14:07:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:36:21.589 14:07:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:36:21.589 14:07:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:21.589 14:07:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:21.589 14:07:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:21.589 14:07:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:21.589 14:07:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:21.589 14:07:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:21.589 14:07:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:21.589 14:07:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:21.589 14:07:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:21.589 14:07:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:36:21.589 14:07:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:28.160 14:07:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:28.160 14:07:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:36:28.160 14:07:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:28.160 14:07:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:28.160 14:07:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:28.160 14:07:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:28.160 14:07:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:28.160 14:07:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:36:28.160 14:07:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:28.160 14:07:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:36:28.160 14:07:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:36:28.160 14:07:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:36:28.160 14:07:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:36:28.160 14:07:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:36:28.160 14:07:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local 
-ga mlx 00:36:28.160 14:07:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:28.160 14:07:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:28.160 14:07:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:28.160 14:07:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:28.161 14:07:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:28.161 14:07:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:28.161 14:07:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:28.161 14:07:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:28.161 14:07:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:28.161 14:07:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:28.161 14:07:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:28.161 14:07:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:28.161 14:07:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:28.161 14:07:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:36:28.161 14:07:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:36:28.161 14:07:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:36:28.161 14:07:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:36:28.161 14:07:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:36:28.161 14:07:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:28.161 14:07:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:28.161 14:07:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:36:28.161 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:36:28.161 14:07:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:36:28.161 14:07:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:36:28.161 14:07:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:36:28.161 14:07:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:36:28.161 14:07:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:36:28.161 14:07:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:36:28.161 14:07:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:28.161 14:07:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:36:28.161 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:36:28.161 14:07:26 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:36:28.161 14:07:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:36:28.161 14:07:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:36:28.161 14:07:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:36:28.161 14:07:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:36:28.161 14:07:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:36:28.161 14:07:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:28.161 14:07:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:36:28.161 14:07:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:28.161 14:07:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:28.161 14:07:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:36:28.161 14:07:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:28.161 14:07:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:28.161 14:07:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:36:28.161 Found net devices under 0000:18:00.0: mlx_0_0 00:36:28.161 14:07:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:28.161 14:07:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:28.161 14:07:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:28.161 14:07:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:36:28.161 14:07:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:28.161 14:07:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:28.161 14:07:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:36:28.161 Found net devices under 0000:18:00.1: mlx_0_1 00:36:28.161 14:07:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:28.161 14:07:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:28.161 14:07:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:36:28.161 14:07:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:28.161 14:07:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:36:28.161 14:07:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:36:28.161 14:07:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@448 -- # rdma_device_init 00:36:28.161 14:07:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:36:28.161 14:07:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@62 -- # uname 00:36:28.161 14:07:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:36:28.161 14:07:26 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@66 -- # modprobe ib_cm 00:36:28.161 14:07:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@67 -- # modprobe ib_core 00:36:28.161 14:07:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@68 -- # modprobe ib_umad 00:36:28.161 14:07:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:36:28.161 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@70 -- # modprobe iw_cm 00:36:28.161 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:36:28.161 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:36:28.161 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # allocate_nic_ips 00:36:28.161 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:36:28.161 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@77 -- # get_rdma_if_list 00:36:28.161 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:36:28.161 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:36:28.161 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:36:28.161 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:36:28.161 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:36:28.161 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:36:28.161 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:36:28.161 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:36:28.161 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@108 -- # echo mlx_0_0 00:36:28.161 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@109 -- # continue 2 00:36:28.161 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:36:28.161 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:36:28.161 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:36:28.161 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:36:28.161 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:36:28.161 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@108 -- # echo mlx_0_1 00:36:28.161 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@109 -- # continue 2 00:36:28.161 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:36:28.161 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:36:28.161 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:36:28.161 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:36:28.161 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:36:28.161 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # cut -d/ -f1 
00:36:28.161 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:36:28.161 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:36:28.161 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:36:28.161 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:36:28.161 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:36:28.161 altname enp24s0f0np0 00:36:28.161 altname ens785f0np0 00:36:28.161 inet 192.168.100.8/24 scope global mlx_0_0 00:36:28.161 valid_lft forever preferred_lft forever 00:36:28.161 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:36:28.161 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:36:28.161 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:36:28.161 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:36:28.161 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:36:28.161 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:36:28.161 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:36:28.161 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:36:28.161 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:36:28.161 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:36:28.161 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:36:28.161 altname enp24s0f1np1 00:36:28.161 altname ens785f1np1 00:36:28.161 inet 192.168.100.9/24 scope global mlx_0_1 00:36:28.161 valid_lft forever preferred_lft forever 00:36:28.162 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:36:28.162 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:28.162 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:36:28.162 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:36:28.162 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:36:28.162 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@90 -- # get_rdma_if_list 00:36:28.162 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:36:28.162 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:36:28.162 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:36:28.162 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:36:28.162 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:36:28.162 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:36:28.162 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:36:28.162 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:36:28.162 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@108 -- # echo mlx_0_0 
00:36:28.162 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@109 -- # continue 2 00:36:28.162 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:36:28.162 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:36:28.162 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:36:28.162 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:36:28.162 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:36:28.162 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@108 -- # echo mlx_0_1 00:36:28.162 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@109 -- # continue 2 00:36:28.162 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:36:28.162 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:36:28.162 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:36:28.162 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:36:28.162 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:36:28.162 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:36:28.162 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:36:28.162 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:36:28.162 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:36:28.162 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:36:28.162 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:36:28.162 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:36:28.162 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:36:28.162 192.168.100.9' 00:36:28.162 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:36:28.162 192.168.100.9' 00:36:28.162 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@485 -- # head -n 1 00:36:28.162 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:36:28.162 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:36:28.162 192.168.100.9' 00:36:28.162 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@486 -- # tail -n +2 00:36:28.162 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@486 -- # head -n 1 00:36:28.162 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:36:28.162 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:36:28.162 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:36:28.162 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:36:28.162 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:36:28.162 
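[Annotation] nvmftestinit above walks the two mlx5 PCI devices, maps them to their net devices (mlx_0_0 and mlx_0_1), and derives the target addresses by parsing `ip -o -4 addr show`, ending with NVMF_FIRST_TARGET_IP=192.168.100.8 and NVMF_SECOND_TARGET_IP=192.168.100.9. A condensed sketch of the get_ip_address idiom visible in the trace (interface names taken from this run):

  get_ip_address() {
    local interface=$1
    # Fourth field of `ip -o -4 addr show` is ADDR/PREFIX; strip the prefix length.
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
  }
  NVMF_FIRST_TARGET_IP=$(get_ip_address mlx_0_0)   # 192.168.100.8 in this run
  NVMF_SECOND_TARGET_IP=$(get_ip_address mlx_0_1)  # 192.168.100.9 in this run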
14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:36:28.162 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:36:28.162 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:28.162 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:28.162 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:28.162 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=1938003 00:36:28.162 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:36:28.162 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 1938003 00:36:28.162 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 1938003 ']' 00:36:28.162 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:28.162 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:28.162 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:28.162 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:28.162 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:28.162 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:28.162 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:36:28.162 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:28.162 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:28.162 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:28.162 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:28.162 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:36:28.162 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:36:28.162 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:36:28.162 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:28.162 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:36:28.162 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:36:28.162 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:36:28.162 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:36:28.162 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=ec48eb522dbf4c807a79151a92443f6c 00:36:28.162 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 
00:36:28.162 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.EiG 00:36:28.162 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key ec48eb522dbf4c807a79151a92443f6c 0 00:36:28.162 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 ec48eb522dbf4c807a79151a92443f6c 0 00:36:28.162 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:36:28.162 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:36:28.162 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=ec48eb522dbf4c807a79151a92443f6c 00:36:28.162 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:36:28.162 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:36:28.162 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.EiG 00:36:28.162 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.EiG 00:36:28.162 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.EiG 00:36:28.162 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:36:28.162 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:36:28.162 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:28.162 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:36:28.162 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:36:28.162 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:36:28.162 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:36:28.162 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=2585756fac08f44cb949bccc8e0e5d94693e69955c0fdeee61033858839b393c 00:36:28.162 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:36:28.162 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.ATR 00:36:28.162 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 2585756fac08f44cb949bccc8e0e5d94693e69955c0fdeee61033858839b393c 3 00:36:28.162 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 2585756fac08f44cb949bccc8e0e5d94693e69955c0fdeee61033858839b393c 3 00:36:28.162 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:36:28.162 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:36:28.162 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=2585756fac08f44cb949bccc8e0e5d94693e69955c0fdeee61033858839b393c 00:36:28.162 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:36:28.162 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:36:28.162 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.ATR 00:36:28.162 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.ATR 00:36:28.162 14:07:27 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.ATR 00:36:28.162 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:36:28.162 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:36:28.162 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:28.162 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:36:28.163 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:36:28.163 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:36:28.163 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:36:28.163 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=3e06f6dacd655104e7b4b2935aea653627351373776c7df7 00:36:28.163 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:36:28.163 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.9v3 00:36:28.163 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 3e06f6dacd655104e7b4b2935aea653627351373776c7df7 0 00:36:28.163 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 3e06f6dacd655104e7b4b2935aea653627351373776c7df7 0 00:36:28.163 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:36:28.163 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:36:28.163 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=3e06f6dacd655104e7b4b2935aea653627351373776c7df7 00:36:28.163 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:36:28.163 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:36:28.163 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.9v3 00:36:28.163 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.9v3 00:36:28.163 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.9v3 00:36:28.163 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:36:28.163 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:36:28.163 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:28.163 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:36:28.163 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:36:28.163 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:36:28.163 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:36:28.163 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=ad9f6869d0726b2da766afa0732105c7f400d84710042752 00:36:28.163 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:36:28.163 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.Tc4 00:36:28.163 
14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key ad9f6869d0726b2da766afa0732105c7f400d84710042752 2 00:36:28.163 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 ad9f6869d0726b2da766afa0732105c7f400d84710042752 2 00:36:28.163 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:36:28.163 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:36:28.163 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=ad9f6869d0726b2da766afa0732105c7f400d84710042752 00:36:28.163 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:36:28.163 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:36:28.163 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.Tc4 00:36:28.163 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.Tc4 00:36:28.163 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.Tc4 00:36:28.163 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:36:28.163 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:36:28.163 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:28.163 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:36:28.163 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:36:28.163 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:36:28.163 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:36:28.163 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=eca86743299f66dbfb658944a9c5c1ab 00:36:28.163 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:36:28.163 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.AvQ 00:36:28.163 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key eca86743299f66dbfb658944a9c5c1ab 1 00:36:28.163 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 eca86743299f66dbfb658944a9c5c1ab 1 00:36:28.163 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:36:28.163 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:36:28.163 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=eca86743299f66dbfb658944a9c5c1ab 00:36:28.163 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:36:28.163 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:36:28.163 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.AvQ 00:36:28.163 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.AvQ 00:36:28.163 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.AvQ 00:36:28.163 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:36:28.163 14:07:27 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:36:28.163 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:28.163 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:36:28.163 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:36:28.163 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:36:28.163 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:36:28.163 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=3818d37dd61cc1714b1254a940dee613 00:36:28.163 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:36:28.163 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.HFW 00:36:28.163 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 3818d37dd61cc1714b1254a940dee613 1 00:36:28.163 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 3818d37dd61cc1714b1254a940dee613 1 00:36:28.163 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:36:28.163 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:36:28.163 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=3818d37dd61cc1714b1254a940dee613 00:36:28.163 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:36:28.163 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:36:28.163 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.HFW 00:36:28.163 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.HFW 00:36:28.163 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.HFW 00:36:28.163 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:36:28.163 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:36:28.163 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:28.163 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:36:28.163 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:36:28.163 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:36:28.163 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:36:28.163 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=f64758dfd801adc7a70a1007b87af064e737fada42e0665c 00:36:28.163 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:36:28.163 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.PiR 00:36:28.163 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key f64758dfd801adc7a70a1007b87af064e737fada42e0665c 2 00:36:28.163 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 
f64758dfd801adc7a70a1007b87af064e737fada42e0665c 2 00:36:28.163 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:36:28.163 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:36:28.163 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=f64758dfd801adc7a70a1007b87af064e737fada42e0665c 00:36:28.163 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:36:28.163 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:36:28.163 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.PiR 00:36:28.163 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.PiR 00:36:28.163 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.PiR 00:36:28.163 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:36:28.163 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:36:28.163 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:28.163 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:36:28.163 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:36:28.163 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:36:28.163 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:36:28.163 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=0526d5dc515a013b4d2a7b75864db77d 00:36:28.163 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:36:28.163 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.1um 00:36:28.163 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 0526d5dc515a013b4d2a7b75864db77d 0 00:36:28.163 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 0526d5dc515a013b4d2a7b75864db77d 0 00:36:28.163 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:36:28.163 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:36:28.164 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=0526d5dc515a013b4d2a7b75864db77d 00:36:28.164 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:36:28.164 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:36:28.164 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.1um 00:36:28.164 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.1um 00:36:28.164 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.1um 00:36:28.164 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:36:28.164 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:36:28.164 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:28.164 
14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:36:28.164 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:36:28.164 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:36:28.164 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:36:28.164 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=55e75b189b3d71c4884ba909ba2007f099dc6d1b846f9e3aa7560f09fcc54bb3 00:36:28.164 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:36:28.164 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.YLr 00:36:28.164 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 55e75b189b3d71c4884ba909ba2007f099dc6d1b846f9e3aa7560f09fcc54bb3 3 00:36:28.164 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 55e75b189b3d71c4884ba909ba2007f099dc6d1b846f9e3aa7560f09fcc54bb3 3 00:36:28.164 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:36:28.164 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:36:28.164 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=55e75b189b3d71c4884ba909ba2007f099dc6d1b846f9e3aa7560f09fcc54bb3 00:36:28.164 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:36:28.164 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:36:28.164 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.YLr 00:36:28.164 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.YLr 00:36:28.164 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.YLr 00:36:28.164 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:36:28.164 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 1938003 00:36:28.164 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 1938003 ']' 00:36:28.164 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:28.164 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:28.164 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:28.164 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
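Each gen_dhchap_key call in the trace draws len/2 random bytes as a hex string with xxd, writes it to a mktemp file, and wraps it in the DH-HMAC-CHAP secret representation DHHC-1:<digest id>:<base64>:, where null/sha256/sha384/sha512 map to digest ids 0 through 3 and the base64 payload is the key text followed by its little-endian CRC32 (decoding the DHHC-1 values echoed above confirms this layout). A sketch of the whole helper; the inline python is an approximation of the heredoc the harness pipes into `python -`:

    gen_dhchap_key() {
        local digest=$1 len=$2 key file
        local -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
        # len hex characters == len/2 random bytes from /dev/urandom.
        key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)
        file=$(mktemp -t "spdk.key-$digest.XXX")
        # DHHC-1:<digest id>:<base64(key text + CRC32 LE)>:
        python3 -c "
    import base64, struct, zlib
    key = b'$key'
    crc = struct.pack('<I', zlib.crc32(key) & 0xffffffff)
    print('DHHC-1:%02x:%s:' % (${digests[$digest]}, base64.b64encode(key + crc).decode()))
    " > "$file"
        chmod 0600 "$file"
        echo "$file"
    }

    keys[0]=$(gen_dhchap_key null 32)      # -> e.g. /tmp/spdk.key-null.EiG
    ckeys[0]=$(gen_dhchap_key sha512 64)   # controller-side challenge key
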
00:36:28.164 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:28.164 14:07:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:28.423 14:07:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:28.423 14:07:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:36:28.423 14:07:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:36:28.423 14:07:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.EiG 00:36:28.423 14:07:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:28.423 14:07:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:28.423 14:07:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:28.423 14:07:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.ATR ]] 00:36:28.423 14:07:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.ATR 00:36:28.423 14:07:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:28.423 14:07:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:28.423 14:07:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:28.423 14:07:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:36:28.423 14:07:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.9v3 00:36:28.423 14:07:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:28.423 14:07:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:28.423 14:07:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:28.423 14:07:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.Tc4 ]] 00:36:28.423 14:07:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Tc4 00:36:28.423 14:07:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:28.423 14:07:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:28.423 14:07:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:28.423 14:07:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:36:28.423 14:07:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.AvQ 00:36:28.423 14:07:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:28.423 14:07:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:28.423 14:07:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:28.423 14:07:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.HFW ]] 00:36:28.423 14:07:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.HFW 00:36:28.423 14:07:28 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:28.424 14:07:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:28.424 14:07:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:28.424 14:07:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:36:28.424 14:07:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.PiR 00:36:28.424 14:07:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:28.424 14:07:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:28.424 14:07:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:28.424 14:07:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.1um ]] 00:36:28.424 14:07:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.1um 00:36:28.424 14:07:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:28.424 14:07:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:28.424 14:07:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:28.424 14:07:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:36:28.424 14:07:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.YLr 00:36:28.424 14:07:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:28.424 14:07:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:28.424 14:07:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:28.424 14:07:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:36:28.424 14:07:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:36:28.424 14:07:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:36:28.424 14:07:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:28.424 14:07:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:28.424 14:07:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:28.424 14:07:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:28.424 14:07:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:28.424 14:07:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:28.424 14:07:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:28.424 14:07:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:28.424 14:07:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:28.424 14:07:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:28.424 14:07:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 192.168.100.8 00:36:28.424 14:07:28 
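Every secret generated above is then registered with the running target through the keyring_file_add_key RPC; rpc_cmd in the trace is the harness wrapper around scripts/rpc.py. The server-side names key0..key4 and ckey0..ckey3 mirror the keys/ckeys bash arrays, and ckeys[4] is intentionally empty so the [[ -n ... ]] guard skips it. Unrolled, the loop amounts to:

    # Paths are the mktemp files produced earlier in this run; ckey4 is
    # absent by design, matching ckeys[4]='' in the trace.
    keys=(/tmp/spdk.key-null.EiG /tmp/spdk.key-null.9v3 /tmp/spdk.key-sha256.AvQ
          /tmp/spdk.key-sha384.PiR /tmp/spdk.key-sha512.YLr)
    ckeys=(/tmp/spdk.key-sha512.ATR /tmp/spdk.key-sha384.Tc4 /tmp/spdk.key-sha256.HFW
           /tmp/spdk.key-null.1um "")

    for i in "${!keys[@]}"; do
        scripts/rpc.py keyring_file_add_key "key$i" "${keys[i]}"
        [[ -n ${ckeys[i]} ]] && scripts/rpc.py keyring_file_add_key "ckey$i" "${ckeys[i]}"
    done
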
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=192.168.100.8
00:36:28.424 14:07:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet
00:36:28.424 14:07:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
00:36:28.424 14:07:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
00:36:28.424 14:07:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1
00:36:28.424 14:07:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme
00:36:28.424 14:07:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]]
00:36:28.424 14:07:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet
00:36:28.424 14:07:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]]
00:36:28.424 14:07:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset
00:36:31.711 Waiting for block devices as requested
00:36:31.711 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma
00:36:31.711 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma
00:36:31.711 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma
00:36:31.711 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma
00:36:31.711 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma
00:36:31.711 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma
00:36:31.711 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma
00:36:31.711 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma
00:36:31.711 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma
00:36:31.970 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma
00:36:31.970 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma
00:36:31.970 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma
00:36:31.970 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma
00:36:32.229 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma
00:36:32.229 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma
00:36:32.229 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma
00:36:32.229 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme
00:36:34.766 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme*
00:36:34.766 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]]
00:36:34.766 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1
00:36:34.766 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1
00:36:34.766 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:36:34.766 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:36:34.766 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1
00:36:34.766 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt
00:36:34.766 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1
00:36:34.766 No valid GPT data, bailing
00:36:34.766 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:36:34.766 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt=
00:36:34.766 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1
00:36:34.766 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1
00:36:34.766 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]]
00:36:34.766 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
00:36:34.766 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
00:36:34.766 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1
00:36:34.766 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0
00:36:34.766 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1
00:36:34.766 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1
00:36:34.766 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1
00:36:34.766 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 192.168.100.8
00:36:34.766 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo rdma
00:36:34.766 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420
00:36:34.766 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4
00:36:34.766 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/
00:36:34.766 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -a 192.168.100.8 -t rdma -s 4420
00:36:34.766
00:36:34.766 Discovery Log Number of Records 2, Generation counter 2
00:36:34.766 =====Discovery Log Entry 0======
00:36:34.766 trtype: rdma
00:36:34.766 adrfam: ipv4
00:36:34.766 subtype: current discovery subsystem
00:36:34.766 treq: not specified, sq flow control disable supported
00:36:34.766 portid: 1
00:36:34.766 trsvcid: 4420
00:36:34.766 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:36:34.766 traddr: 192.168.100.8
00:36:34.766 eflags: none
00:36:34.766 rdma_prtype: not specified
00:36:34.767 rdma_qptype: connected
00:36:34.767 rdma_cms: rdma-cm
00:36:34.767 rdma_pkey: 0x0000
00:36:34.767 =====Discovery Log Entry 1======
00:36:34.767 trtype: rdma
00:36:34.767 adrfam: ipv4
00:36:34.767 subtype: nvme subsystem
00:36:34.767 treq: not specified, sq flow control disable supported
00:36:34.767 portid: 1
00:36:34.767 trsvcid: 4420
00:36:34.767 subnqn: nqn.2024-02.io.spdk:cnode0
00:36:34.767 traddr: 192.168.100.8
00:36:34.767 eflags: none
00:36:34.767 rdma_prtype: not specified
00:36:34.767 rdma_qptype: connected
00:36:34.767 rdma_cms: rdma-cm
00:36:34.767 rdma_pkey: 0x0000
00:36:34.767 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
00:36:34.767 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0
00:36:34.767 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host --
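configure_kernel_target drives the kernel nvmet stack purely through configfs: create the subsystem and namespace, point the namespace at the local /dev/nvme0n1 (the only block device the GPT probe left unclaimed), open an RDMA port, and link the two. The trace's output redirections are not shown, so the attribute file names below are the standard kernel nvmet configfs layout rather than something visible in this log:

    # Condensed replay of the mkdir/echo/ln sequence above; run as root.
    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
    port=$nvmet/ports/1

    modprobe nvmet
    mkdir "$subsys" "$subsys/namespaces/1" "$port"

    echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$subsys/attr_model"   # assumed target file
    echo 1 > "$subsys/attr_allow_any_host"
    echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
    echo 1 > "$subsys/namespaces/1/enable"

    echo 192.168.100.8 > "$port/addr_traddr"
    echo rdma > "$port/addr_trtype"
    echo 4420 > "$port/addr_trsvcid"
    echo ipv4 > "$port/addr_adrfam"

    # Linking the subsystem under the port starts listening; the discovery
    # log above shows both the discovery subsystem and cnode0 as a result.
    ln -s "$subsys" "$port/subsystems/"
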
host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:36:34.767 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:36:34.767 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:34.767 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:34.767 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:34.767 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:34.767 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2UwNmY2ZGFjZDY1NTEwNGU3YjRiMjkzNWFlYTY1MzYyNzM1MTM3Mzc3NmM3ZGY3oPZozQ==: 00:36:34.767 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWQ5ZjY4NjlkMDcyNmIyZGE3NjZhZmEwNzMyMTA1YzdmNDAwZDg0NzEwMDQyNzUy92f4Lg==: 00:36:34.767 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:34.767 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:34.767 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2UwNmY2ZGFjZDY1NTEwNGU3YjRiMjkzNWFlYTY1MzYyNzM1MTM3Mzc3NmM3ZGY3oPZozQ==: 00:36:34.767 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWQ5ZjY4NjlkMDcyNmIyZGE3NjZhZmEwNzMyMTA1YzdmNDAwZDg0NzEwMDQyNzUy92f4Lg==: ]] 00:36:34.767 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWQ5ZjY4NjlkMDcyNmIyZGE3NjZhZmEwNzMyMTA1YzdmNDAwZDg0NzEwMDQyNzUy92f4Lg==: 00:36:34.767 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:36:34.767 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:36:34.767 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:36:34.767 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:36:34.767 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:36:34.767 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:34.767 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:36:34.767 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:36:34.767 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:34.767 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:34.767 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:36:34.767 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:34.767 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:34.767 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:34.767 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
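nvmet_auth_set_key then arms DH-HMAC-CHAP on the kernel side for the host entry created above: the hash, the FFDHE group, the host key and, when a ckey exists, the controller key for bidirectional authentication. The echoed values below are verbatim from this run; the destination attribute files are not visible in the trace, so the dhchap_* names are an assumption based on the kernel nvmet host configfs layout:

    host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

    echo 'hmac(sha256)' > "$host/dhchap_hash"      # assumed attribute names
    echo ffdhe2048 > "$host/dhchap_dhgroup"
    echo 'DHHC-1:00:M2UwNmY2ZGFjZDY1NTEwNGU3YjRiMjkzNWFlYTY1MzYyNzM1MTM3Mzc3NmM3ZGY3oPZozQ==:' \
        > "$host/dhchap_key"                       # keys[1], null digest
    echo 'DHHC-1:02:YWQ5ZjY4NjlkMDcyNmIyZGE3NjZhZmEwNzMyMTA1YzdmNDAwZDg0NzEwMDQyNzUy92f4Lg==:' \
        > "$host/dhchap_ctrl_key"                  # ckeys[1], sha384
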
host/auth.sh@61 -- # get_main_ns_ip 00:36:34.767 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:34.767 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:34.767 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:34.767 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:34.767 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:34.767 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:34.767 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:34.767 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:34.767 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:34.767 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:34.767 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:34.767 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:34.767 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:34.767 nvme0n1 00:36:34.767 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:34.767 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:34.767 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:34.767 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:34.767 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:34.767 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:34.767 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:34.767 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:34.767 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:34.767 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:34.767 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:34.767 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:36:34.767 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:34.767 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:34.767 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:36:34.767 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:34.767 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:34.767 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe2048 00:36:34.767 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:34.767 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWM0OGViNTIyZGJmNGM4MDdhNzkxNTFhOTI0NDNmNmOjM+Tl: 00:36:34.767 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjU4NTc1NmZhYzA4ZjQ0Y2I5NDliY2NjOGUwZTVkOTQ2OTNlNjk5NTVjMGZkZWVlNjEwMzM4NTg4MzliMzkzY5vPHe0=: 00:36:34.767 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:34.767 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:34.767 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWM0OGViNTIyZGJmNGM4MDdhNzkxNTFhOTI0NDNmNmOjM+Tl: 00:36:34.767 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjU4NTc1NmZhYzA4ZjQ0Y2I5NDliY2NjOGUwZTVkOTQ2OTNlNjk5NTVjMGZkZWVlNjEwMzM4NTg4MzliMzkzY5vPHe0=: ]] 00:36:34.767 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjU4NTc1NmZhYzA4ZjQ0Y2I5NDliY2NjOGUwZTVkOTQ2OTNlNjk5NTVjMGZkZWVlNjEwMzM4NTg4MzliMzkzY5vPHe0=: 00:36:34.767 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:36:34.768 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:34.768 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:34.768 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:34.768 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:34.768 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:34.768 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:36:34.768 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:34.768 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:34.768 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:34.768 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:34.768 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:34.768 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:34.768 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:34.768 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:34.768 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:34.768 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:34.768 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:34.768 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:34.768 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:34.768 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:34.768 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:34.768 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:34.768 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:35.027 nvme0n1 00:36:35.027 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:35.027 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:35.027 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:35.027 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:35.027 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:35.027 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:35.027 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:35.027 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:35.028 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:35.028 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:35.028 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:35.028 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:35.028 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:36:35.028 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:35.028 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:35.028 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:35.028 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:35.028 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2UwNmY2ZGFjZDY1NTEwNGU3YjRiMjkzNWFlYTY1MzYyNzM1MTM3Mzc3NmM3ZGY3oPZozQ==: 00:36:35.028 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWQ5ZjY4NjlkMDcyNmIyZGE3NjZhZmEwNzMyMTA1YzdmNDAwZDg0NzEwMDQyNzUy92f4Lg==: 00:36:35.028 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:35.028 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:35.028 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2UwNmY2ZGFjZDY1NTEwNGU3YjRiMjkzNWFlYTY1MzYyNzM1MTM3Mzc3NmM3ZGY3oPZozQ==: 00:36:35.028 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWQ5ZjY4NjlkMDcyNmIyZGE3NjZhZmEwNzMyMTA1YzdmNDAwZDg0NzEwMDQyNzUy92f4Lg==: ]] 00:36:35.028 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWQ5ZjY4NjlkMDcyNmIyZGE3NjZhZmEwNzMyMTA1YzdmNDAwZDg0NzEwMDQyNzUy92f4Lg==: 00:36:35.028 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:36:35.028 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
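connect_authenticate then exercises the initiator path against that kernel target: bdev_nvme_set_options restricts the negotiable digests and DH groups, and bdev_nvme_attach_controller dials in with --dhchap-key/--dhchap-ctrlr-key naming the keyring entries registered earlier (key0/ckey0 for this first sha256 plus ffdhe2048 pass). Unwrapped from rpc_cmd, the two calls are:

    scripts/rpc.py bdev_nvme_set_options \
        --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048

    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 \
        -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
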
00:36:35.028 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:35.028 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:35.028 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:35.028 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:35.028 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:36:35.028 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:35.028 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:35.028 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:35.028 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:35.028 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:35.028 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:35.028 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:35.028 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:35.028 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:35.028 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:35.028 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:35.028 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:35.028 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:35.028 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:35.028 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:35.028 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:35.028 14:07:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:35.286 nvme0n1 00:36:35.286 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:35.286 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:35.286 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:35.286 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:35.286 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:35.286 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:35.286 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:35.286 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:35.287 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:36:35.287 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:35.287 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:35.287 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:35.287 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:36:35.287 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:35.287 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:35.287 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:35.287 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:35.287 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWNhODY3NDMyOTlmNjZkYmZiNjU4OTQ0YTljNWMxYWKXTIz0: 00:36:35.287 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzgxOGQzN2RkNjFjYzE3MTRiMTI1NGE5NDBkZWU2MTPqGmQM: 00:36:35.287 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:35.287 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:35.287 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWNhODY3NDMyOTlmNjZkYmZiNjU4OTQ0YTljNWMxYWKXTIz0: 00:36:35.287 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzgxOGQzN2RkNjFjYzE3MTRiMTI1NGE5NDBkZWU2MTPqGmQM: ]] 00:36:35.287 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzgxOGQzN2RkNjFjYzE3MTRiMTI1NGE5NDBkZWU2MTPqGmQM: 00:36:35.287 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:36:35.287 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:35.287 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:35.287 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:35.287 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:35.287 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:35.287 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:36:35.287 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:35.287 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:35.287 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:35.287 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:35.287 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:35.287 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:35.287 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:35.287 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:35.287 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:35.287 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:35.287 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:35.287 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:35.287 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:35.287 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:35.287 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:35.287 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:35.287 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:35.545 nvme0n1 00:36:35.545 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:35.545 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:35.545 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:35.545 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:35.545 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:35.545 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:35.545 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:35.545 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:35.545 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:35.545 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:35.545 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:35.545 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:35.545 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:36:35.545 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:35.545 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:35.545 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:35.545 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:35.545 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjY0NzU4ZGZkODAxYWRjN2E3MGExMDA3Yjg3YWYwNjRlNzM3ZmFkYTQyZTA2NjVjOSSd0Q==: 00:36:35.545 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDUyNmQ1ZGM1MTVhMDEzYjRkMmE3Yjc1ODY0ZGI3N2QxlQ6S: 00:36:35.545 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:35.545 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:35.545 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:ZjY0NzU4ZGZkODAxYWRjN2E3MGExMDA3Yjg3YWYwNjRlNzM3ZmFkYTQyZTA2NjVjOSSd0Q==: 00:36:35.545 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDUyNmQ1ZGM1MTVhMDEzYjRkMmE3Yjc1ODY0ZGI3N2QxlQ6S: ]] 00:36:35.545 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDUyNmQ1ZGM1MTVhMDEzYjRkMmE3Yjc1ODY0ZGI3N2QxlQ6S: 00:36:35.545 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:36:35.546 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:35.546 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:35.546 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:35.546 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:35.546 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:35.546 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:36:35.546 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:35.546 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:35.546 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:35.546 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:35.546 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:35.546 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:35.546 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:35.546 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:35.546 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:35.546 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:35.546 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:35.546 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:35.546 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:35.546 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:35.546 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:35.546 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:35.546 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:35.804 nvme0n1 00:36:35.804 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:35.804 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:35.804 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:35.804 14:07:35 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:35.804 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:35.804 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:35.804 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:35.804 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:35.804 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:35.804 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:35.804 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:35.804 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:35.804 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:36:35.804 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:35.804 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:35.804 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:35.804 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:35.804 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTVlNzViMTg5YjNkNzFjNDg4NGJhOTA5YmEyMDA3ZjA5OWRjNmQxYjg0NmY5ZTNhYTc1NjBmMDlmY2M1NGJiM1abWWA=: 00:36:35.804 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:35.804 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:35.804 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:35.804 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTVlNzViMTg5YjNkNzFjNDg4NGJhOTA5YmEyMDA3ZjA5OWRjNmQxYjg0NmY5ZTNhYTc1NjBmMDlmY2M1NGJiM1abWWA=: 00:36:35.804 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:35.804 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:36:35.804 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:35.804 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:35.804 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:35.804 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:35.804 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:35.805 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:36:35.805 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:35.805 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:35.805 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:35.805 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:35.805 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # 
local ip 00:36:35.805 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:35.805 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:35.805 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:35.805 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:35.805 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:35.805 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:35.805 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:35.805 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:35.805 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:35.805 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:35.805 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:35.805 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:36.064 nvme0n1 00:36:36.064 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:36.064 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:36.064 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:36.064 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:36.064 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:36.064 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:36.064 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:36.064 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:36.064 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:36.064 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:36.064 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:36.064 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:36.064 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:36.064 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:36:36.064 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:36.064 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:36.064 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:36.064 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:36.064 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWM0OGViNTIyZGJmNGM4MDdhNzkxNTFhOTI0NDNmNmOjM+Tl: 
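[Editor's note] The nvmet_auth_set_key helper traced throughout this run (host/auth.sh@42-51) programs the DH-HMAC-CHAP material into the kernel nvmet target's host entry before each connect attempt. A minimal sketch of the equivalent writes, assuming the stock Linux nvmet configfs layout and the host NQN used in this run; $key and $ckey stand in for the DHHC-1 secrets echoed in the trace:

    # Target side: push digest, DH group and secrets into the nvmet host entry.
    # Path and attribute names assume the standard kernel nvmet configfs ABI.
    host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha256)' > "$host_dir/dhchap_hash"      # auth.sh@48: digest
    echo ffdhe3072      > "$host_dir/dhchap_dhgroup"   # auth.sh@49: DH group
    echo "$key"         > "$host_dir/dhchap_key"       # auth.sh@50: host secret
    if [[ -n "$ckey" ]]; then                          # auth.sh@51: bidirectional only
        echo "$ckey" > "$host_dir/dhchap_ctrl_key"     # controller secret
    fi

When $ckey is empty (as for keyid=4 in this trace), the controller key is skipped and only unidirectional authentication is configured.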
00:36:36.064 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjU4NTc1NmZhYzA4ZjQ0Y2I5NDliY2NjOGUwZTVkOTQ2OTNlNjk5NTVjMGZkZWVlNjEwMzM4NTg4MzliMzkzY5vPHe0=: 00:36:36.064 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:36.064 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:36.064 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWM0OGViNTIyZGJmNGM4MDdhNzkxNTFhOTI0NDNmNmOjM+Tl: 00:36:36.064 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjU4NTc1NmZhYzA4ZjQ0Y2I5NDliY2NjOGUwZTVkOTQ2OTNlNjk5NTVjMGZkZWVlNjEwMzM4NTg4MzliMzkzY5vPHe0=: ]] 00:36:36.064 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjU4NTc1NmZhYzA4ZjQ0Y2I5NDliY2NjOGUwZTVkOTQ2OTNlNjk5NTVjMGZkZWVlNjEwMzM4NTg4MzliMzkzY5vPHe0=: 00:36:36.064 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:36:36.064 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:36.064 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:36.065 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:36.065 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:36.065 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:36.065 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:36:36.065 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:36.065 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:36.065 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:36.065 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:36.065 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:36.065 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:36.324 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:36.324 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:36.324 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:36.324 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:36.324 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:36.324 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:36.324 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:36.324 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:36.324 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:36.324 14:07:35 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:36.324 14:07:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:36.324 nvme0n1 00:36:36.324 14:07:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:36.324 14:07:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:36.324 14:07:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:36.324 14:07:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:36.324 14:07:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:36.324 14:07:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:36.324 14:07:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:36.324 14:07:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:36.324 14:07:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:36.324 14:07:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:36.583 14:07:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:36.583 14:07:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:36.583 14:07:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:36:36.583 14:07:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:36.583 14:07:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:36.583 14:07:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:36.583 14:07:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:36.583 14:07:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2UwNmY2ZGFjZDY1NTEwNGU3YjRiMjkzNWFlYTY1MzYyNzM1MTM3Mzc3NmM3ZGY3oPZozQ==: 00:36:36.583 14:07:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWQ5ZjY4NjlkMDcyNmIyZGE3NjZhZmEwNzMyMTA1YzdmNDAwZDg0NzEwMDQyNzUy92f4Lg==: 00:36:36.583 14:07:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:36.583 14:07:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:36.583 14:07:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2UwNmY2ZGFjZDY1NTEwNGU3YjRiMjkzNWFlYTY1MzYyNzM1MTM3Mzc3NmM3ZGY3oPZozQ==: 00:36:36.583 14:07:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWQ5ZjY4NjlkMDcyNmIyZGE3NjZhZmEwNzMyMTA1YzdmNDAwZDg0NzEwMDQyNzUy92f4Lg==: ]] 00:36:36.583 14:07:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWQ5ZjY4NjlkMDcyNmIyZGE3NjZhZmEwNzMyMTA1YzdmNDAwZDg0NzEwMDQyNzUy92f4Lg==: 00:36:36.583 14:07:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:36:36.583 14:07:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:36.583 14:07:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:36.583 14:07:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:36.583 14:07:36 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:36.583 14:07:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:36.583 14:07:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:36:36.583 14:07:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:36.583 14:07:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:36.583 14:07:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:36.583 14:07:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:36.583 14:07:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:36.583 14:07:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:36.583 14:07:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:36.583 14:07:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:36.583 14:07:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:36.583 14:07:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:36.583 14:07:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:36.583 14:07:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:36.583 14:07:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:36.583 14:07:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:36.583 14:07:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:36.583 14:07:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:36.583 14:07:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:36.583 nvme0n1 00:36:36.583 14:07:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:36.583 14:07:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:36.583 14:07:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:36.583 14:07:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:36.583 14:07:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:36.583 14:07:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:36.843 14:07:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:36.843 14:07:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:36.843 14:07:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:36.843 14:07:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:36.843 14:07:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
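[Editor's note] Each connect_authenticate round in this trace follows the same host-side pattern: restrict the initiator to a single digest/DH-group pair, attach over RDMA with the key names for this keyid, confirm the controller came up, then detach. Condensed below as plain rpc.py calls (rpc_cmd is the autotest wrapper around scripts/rpc.py; the address, NQNs, and flags are copied from the trace, and key2/ckey2 are key names registered earlier in the test):

    # Host side: one authentication round (sha256 / ffdhe3072 / keyid=2).
    ./scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
        -a 192.168.100.8 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2
    # The attach only succeeds if DH-HMAC-CHAP completed; "nvme0n1" appearing
    # in the trace is the resulting namespace bdev.
    ./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
    ./scripts/rpc.py bdev_nvme_detach_controller nvme0

This pattern then repeats for the next keyid, which is what the following entries show.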
00:36:36.843 14:07:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:36.843 14:07:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:36:36.843 14:07:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:36.843 14:07:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:36.843 14:07:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:36.843 14:07:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:36.843 14:07:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWNhODY3NDMyOTlmNjZkYmZiNjU4OTQ0YTljNWMxYWKXTIz0: 00:36:36.843 14:07:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzgxOGQzN2RkNjFjYzE3MTRiMTI1NGE5NDBkZWU2MTPqGmQM: 00:36:36.843 14:07:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:36.843 14:07:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:36.843 14:07:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWNhODY3NDMyOTlmNjZkYmZiNjU4OTQ0YTljNWMxYWKXTIz0: 00:36:36.843 14:07:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzgxOGQzN2RkNjFjYzE3MTRiMTI1NGE5NDBkZWU2MTPqGmQM: ]] 00:36:36.843 14:07:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzgxOGQzN2RkNjFjYzE3MTRiMTI1NGE5NDBkZWU2MTPqGmQM: 00:36:36.843 14:07:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:36:36.843 14:07:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:36.843 14:07:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:36.843 14:07:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:36.843 14:07:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:36.843 14:07:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:36.843 14:07:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:36:36.843 14:07:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:36.843 14:07:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:36.843 14:07:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:36.843 14:07:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:36.843 14:07:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:36.843 14:07:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:36.843 14:07:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:36.843 14:07:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:36.843 14:07:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:36.843 14:07:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:36.843 14:07:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP 
]] 00:36:36.843 14:07:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:36.843 14:07:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:36.844 14:07:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:36.844 14:07:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:36.844 14:07:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:36.844 14:07:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:37.103 nvme0n1 00:36:37.103 14:07:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:37.103 14:07:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:37.103 14:07:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:37.103 14:07:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:37.103 14:07:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:37.103 14:07:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:37.103 14:07:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:37.103 14:07:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:37.103 14:07:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:37.103 14:07:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:37.103 14:07:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:37.103 14:07:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:37.103 14:07:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:36:37.103 14:07:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:37.103 14:07:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:37.103 14:07:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:37.103 14:07:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:37.103 14:07:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjY0NzU4ZGZkODAxYWRjN2E3MGExMDA3Yjg3YWYwNjRlNzM3ZmFkYTQyZTA2NjVjOSSd0Q==: 00:36:37.103 14:07:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDUyNmQ1ZGM1MTVhMDEzYjRkMmE3Yjc1ODY0ZGI3N2QxlQ6S: 00:36:37.103 14:07:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:37.103 14:07:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:37.103 14:07:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjY0NzU4ZGZkODAxYWRjN2E3MGExMDA3Yjg3YWYwNjRlNzM3ZmFkYTQyZTA2NjVjOSSd0Q==: 00:36:37.103 14:07:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDUyNmQ1ZGM1MTVhMDEzYjRkMmE3Yjc1ODY0ZGI3N2QxlQ6S: ]] 00:36:37.103 14:07:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # 
echo DHHC-1:00:MDUyNmQ1ZGM1MTVhMDEzYjRkMmE3Yjc1ODY0ZGI3N2QxlQ6S: 00:36:37.103 14:07:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:36:37.103 14:07:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:37.103 14:07:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:37.103 14:07:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:37.103 14:07:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:37.103 14:07:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:37.103 14:07:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:36:37.103 14:07:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:37.103 14:07:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:37.103 14:07:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:37.103 14:07:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:37.103 14:07:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:37.103 14:07:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:37.103 14:07:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:37.103 14:07:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:37.103 14:07:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:37.103 14:07:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:37.103 14:07:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:37.103 14:07:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:37.103 14:07:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:37.103 14:07:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:37.103 14:07:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:37.103 14:07:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:37.103 14:07:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:37.362 nvme0n1 00:36:37.362 14:07:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:37.362 14:07:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:37.362 14:07:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:37.362 14:07:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:37.362 14:07:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:37.362 14:07:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:37.362 14:07:37 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:37.362 14:07:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:37.362 14:07:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:37.362 14:07:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:37.362 14:07:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:37.362 14:07:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:37.362 14:07:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:36:37.362 14:07:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:37.362 14:07:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:37.362 14:07:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:37.362 14:07:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:37.362 14:07:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTVlNzViMTg5YjNkNzFjNDg4NGJhOTA5YmEyMDA3ZjA5OWRjNmQxYjg0NmY5ZTNhYTc1NjBmMDlmY2M1NGJiM1abWWA=: 00:36:37.362 14:07:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:37.362 14:07:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:37.362 14:07:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:37.362 14:07:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTVlNzViMTg5YjNkNzFjNDg4NGJhOTA5YmEyMDA3ZjA5OWRjNmQxYjg0NmY5ZTNhYTc1NjBmMDlmY2M1NGJiM1abWWA=: 00:36:37.362 14:07:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:37.362 14:07:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:36:37.362 14:07:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:37.362 14:07:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:37.362 14:07:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:37.362 14:07:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:37.363 14:07:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:37.363 14:07:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:36:37.363 14:07:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:37.363 14:07:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:37.363 14:07:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:37.363 14:07:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:37.363 14:07:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:37.363 14:07:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:37.363 14:07:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:37.363 14:07:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:37.363 14:07:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:37.363 14:07:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:37.363 14:07:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:37.363 14:07:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:37.363 14:07:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:37.363 14:07:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:37.363 14:07:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:37.363 14:07:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:37.363 14:07:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:37.622 nvme0n1 00:36:37.622 14:07:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:37.622 14:07:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:37.622 14:07:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:37.622 14:07:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:37.622 14:07:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:37.622 14:07:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:37.622 14:07:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:37.622 14:07:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:37.622 14:07:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:37.622 14:07:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:37.622 14:07:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:37.622 14:07:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:37.622 14:07:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:37.622 14:07:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:36:37.622 14:07:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:37.622 14:07:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:37.622 14:07:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:37.622 14:07:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:37.622 14:07:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWM0OGViNTIyZGJmNGM4MDdhNzkxNTFhOTI0NDNmNmOjM+Tl: 00:36:37.622 14:07:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjU4NTc1NmZhYzA4ZjQ0Y2I5NDliY2NjOGUwZTVkOTQ2OTNlNjk5NTVjMGZkZWVlNjEwMzM4NTg4MzliMzkzY5vPHe0=: 00:36:37.622 14:07:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:37.622 
14:07:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:37.622 14:07:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWM0OGViNTIyZGJmNGM4MDdhNzkxNTFhOTI0NDNmNmOjM+Tl: 00:36:37.622 14:07:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjU4NTc1NmZhYzA4ZjQ0Y2I5NDliY2NjOGUwZTVkOTQ2OTNlNjk5NTVjMGZkZWVlNjEwMzM4NTg4MzliMzkzY5vPHe0=: ]] 00:36:37.622 14:07:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjU4NTc1NmZhYzA4ZjQ0Y2I5NDliY2NjOGUwZTVkOTQ2OTNlNjk5NTVjMGZkZWVlNjEwMzM4NTg4MzliMzkzY5vPHe0=: 00:36:37.622 14:07:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:36:37.622 14:07:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:37.622 14:07:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:37.622 14:07:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:37.622 14:07:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:37.622 14:07:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:37.622 14:07:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:36:37.622 14:07:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:37.622 14:07:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:37.622 14:07:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:37.622 14:07:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:37.622 14:07:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:37.622 14:07:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:37.622 14:07:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:37.622 14:07:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:37.622 14:07:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:37.622 14:07:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:37.622 14:07:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:37.622 14:07:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:37.622 14:07:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:37.622 14:07:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:37.622 14:07:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:37.622 14:07:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:37.622 14:07:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:37.881 nvme0n1 00:36:37.881 14:07:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:37.881 
14:07:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:37.881 14:07:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:37.881 14:07:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:37.881 14:07:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:37.881 14:07:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:37.881 14:07:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:37.881 14:07:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:37.881 14:07:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:37.881 14:07:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:37.881 14:07:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:37.881 14:07:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:37.881 14:07:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:36:37.881 14:07:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:37.881 14:07:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:37.881 14:07:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:37.881 14:07:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:37.881 14:07:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2UwNmY2ZGFjZDY1NTEwNGU3YjRiMjkzNWFlYTY1MzYyNzM1MTM3Mzc3NmM3ZGY3oPZozQ==: 00:36:37.881 14:07:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWQ5ZjY4NjlkMDcyNmIyZGE3NjZhZmEwNzMyMTA1YzdmNDAwZDg0NzEwMDQyNzUy92f4Lg==: 00:36:37.882 14:07:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:37.882 14:07:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:37.882 14:07:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2UwNmY2ZGFjZDY1NTEwNGU3YjRiMjkzNWFlYTY1MzYyNzM1MTM3Mzc3NmM3ZGY3oPZozQ==: 00:36:37.882 14:07:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWQ5ZjY4NjlkMDcyNmIyZGE3NjZhZmEwNzMyMTA1YzdmNDAwZDg0NzEwMDQyNzUy92f4Lg==: ]] 00:36:37.882 14:07:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWQ5ZjY4NjlkMDcyNmIyZGE3NjZhZmEwNzMyMTA1YzdmNDAwZDg0NzEwMDQyNzUy92f4Lg==: 00:36:37.882 14:07:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:36:37.882 14:07:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:37.882 14:07:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:37.882 14:07:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:37.882 14:07:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:37.882 14:07:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:37.882 14:07:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe4096 00:36:37.882 14:07:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:37.882 14:07:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:37.882 14:07:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:37.882 14:07:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:37.882 14:07:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:37.882 14:07:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:37.882 14:07:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:37.882 14:07:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:37.882 14:07:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:37.882 14:07:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:37.882 14:07:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:37.882 14:07:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:37.882 14:07:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:37.882 14:07:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:37.882 14:07:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:37.882 14:07:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:37.882 14:07:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:38.140 nvme0n1 00:36:38.140 14:07:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:38.399 14:07:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:38.399 14:07:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:38.399 14:07:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:38.399 14:07:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:38.399 14:07:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:38.399 14:07:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:38.399 14:07:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:38.399 14:07:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:38.399 14:07:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:38.399 14:07:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:38.399 14:07:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:38.399 14:07:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:36:38.399 14:07:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key 
ckey 00:36:38.399 14:07:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:38.399 14:07:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:38.399 14:07:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:38.399 14:07:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWNhODY3NDMyOTlmNjZkYmZiNjU4OTQ0YTljNWMxYWKXTIz0: 00:36:38.399 14:07:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzgxOGQzN2RkNjFjYzE3MTRiMTI1NGE5NDBkZWU2MTPqGmQM: 00:36:38.399 14:07:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:38.399 14:07:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:38.399 14:07:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWNhODY3NDMyOTlmNjZkYmZiNjU4OTQ0YTljNWMxYWKXTIz0: 00:36:38.399 14:07:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzgxOGQzN2RkNjFjYzE3MTRiMTI1NGE5NDBkZWU2MTPqGmQM: ]] 00:36:38.399 14:07:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzgxOGQzN2RkNjFjYzE3MTRiMTI1NGE5NDBkZWU2MTPqGmQM: 00:36:38.399 14:07:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:36:38.399 14:07:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:38.399 14:07:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:38.399 14:07:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:38.399 14:07:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:38.399 14:07:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:38.399 14:07:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:36:38.399 14:07:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:38.399 14:07:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:38.399 14:07:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:38.399 14:07:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:38.399 14:07:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:38.399 14:07:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:38.399 14:07:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:38.399 14:07:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:38.399 14:07:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:38.399 14:07:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:38.399 14:07:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:38.399 14:07:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:38.399 14:07:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:38.399 14:07:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:38.399 
14:07:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:38.399 14:07:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:38.399 14:07:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:38.658 nvme0n1 00:36:38.658 14:07:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:38.658 14:07:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:38.658 14:07:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:38.658 14:07:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:38.658 14:07:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:38.658 14:07:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:38.658 14:07:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:38.658 14:07:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:38.659 14:07:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:38.659 14:07:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:38.659 14:07:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:38.659 14:07:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:38.659 14:07:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:36:38.659 14:07:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:38.659 14:07:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:38.659 14:07:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:38.659 14:07:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:38.659 14:07:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjY0NzU4ZGZkODAxYWRjN2E3MGExMDA3Yjg3YWYwNjRlNzM3ZmFkYTQyZTA2NjVjOSSd0Q==: 00:36:38.659 14:07:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDUyNmQ1ZGM1MTVhMDEzYjRkMmE3Yjc1ODY0ZGI3N2QxlQ6S: 00:36:38.659 14:07:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:38.659 14:07:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:38.659 14:07:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjY0NzU4ZGZkODAxYWRjN2E3MGExMDA3Yjg3YWYwNjRlNzM3ZmFkYTQyZTA2NjVjOSSd0Q==: 00:36:38.659 14:07:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDUyNmQ1ZGM1MTVhMDEzYjRkMmE3Yjc1ODY0ZGI3N2QxlQ6S: ]] 00:36:38.659 14:07:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDUyNmQ1ZGM1MTVhMDEzYjRkMmE3Yjc1ODY0ZGI3N2QxlQ6S: 00:36:38.659 14:07:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:36:38.659 14:07:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:38.659 14:07:38 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:38.659 14:07:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:38.659 14:07:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:38.659 14:07:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:38.659 14:07:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:36:38.659 14:07:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:38.659 14:07:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:38.659 14:07:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:38.659 14:07:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:38.659 14:07:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:38.659 14:07:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:38.659 14:07:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:38.659 14:07:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:38.659 14:07:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:38.659 14:07:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:38.659 14:07:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:38.659 14:07:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:38.659 14:07:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:38.659 14:07:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:38.659 14:07:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:38.659 14:07:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:38.659 14:07:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:38.918 nvme0n1 00:36:38.918 14:07:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:38.918 14:07:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:38.918 14:07:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:38.918 14:07:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:38.918 14:07:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:38.918 14:07:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:38.918 14:07:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:38.918 14:07:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:38.918 14:07:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:38.918 
14:07:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:38.918 14:07:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:38.918 14:07:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:38.918 14:07:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:36:38.918 14:07:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:38.918 14:07:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:38.918 14:07:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:38.918 14:07:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:38.918 14:07:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTVlNzViMTg5YjNkNzFjNDg4NGJhOTA5YmEyMDA3ZjA5OWRjNmQxYjg0NmY5ZTNhYTc1NjBmMDlmY2M1NGJiM1abWWA=: 00:36:38.918 14:07:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:38.918 14:07:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:38.918 14:07:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:38.918 14:07:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTVlNzViMTg5YjNkNzFjNDg4NGJhOTA5YmEyMDA3ZjA5OWRjNmQxYjg0NmY5ZTNhYTc1NjBmMDlmY2M1NGJiM1abWWA=: 00:36:38.918 14:07:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:38.918 14:07:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:36:38.918 14:07:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:38.918 14:07:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:38.918 14:07:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:38.918 14:07:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:38.918 14:07:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:38.918 14:07:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:36:38.918 14:07:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:38.918 14:07:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:39.177 14:07:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:39.177 14:07:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:39.177 14:07:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:39.177 14:07:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:39.177 14:07:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:39.177 14:07:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:39.177 14:07:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:39.177 14:07:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:39.177 14:07:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z 
NVMF_FIRST_TARGET_IP ]] 00:36:39.177 14:07:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:39.177 14:07:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:39.177 14:07:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:39.177 14:07:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:39.177 14:07:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:39.177 14:07:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:39.436 nvme0n1 00:36:39.436 14:07:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:39.436 14:07:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:39.436 14:07:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:39.436 14:07:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:39.436 14:07:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:39.436 14:07:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:39.436 14:07:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:39.436 14:07:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:39.436 14:07:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:39.436 14:07:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:39.436 14:07:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:39.436 14:07:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:39.436 14:07:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:39.436 14:07:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:36:39.436 14:07:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:39.436 14:07:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:39.436 14:07:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:39.436 14:07:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:39.436 14:07:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWM0OGViNTIyZGJmNGM4MDdhNzkxNTFhOTI0NDNmNmOjM+Tl: 00:36:39.436 14:07:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjU4NTc1NmZhYzA4ZjQ0Y2I5NDliY2NjOGUwZTVkOTQ2OTNlNjk5NTVjMGZkZWVlNjEwMzM4NTg4MzliMzkzY5vPHe0=: 00:36:39.436 14:07:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:39.436 14:07:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:39.436 14:07:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWM0OGViNTIyZGJmNGM4MDdhNzkxNTFhOTI0NDNmNmOjM+Tl: 00:36:39.437 14:07:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:MjU4NTc1NmZhYzA4ZjQ0Y2I5NDliY2NjOGUwZTVkOTQ2OTNlNjk5NTVjMGZkZWVlNjEwMzM4NTg4MzliMzkzY5vPHe0=: ]] 00:36:39.437 14:07:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjU4NTc1NmZhYzA4ZjQ0Y2I5NDliY2NjOGUwZTVkOTQ2OTNlNjk5NTVjMGZkZWVlNjEwMzM4NTg4MzliMzkzY5vPHe0=: 00:36:39.437 14:07:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:36:39.437 14:07:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:39.437 14:07:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:39.437 14:07:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:39.437 14:07:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:39.437 14:07:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:39.437 14:07:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:36:39.437 14:07:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:39.437 14:07:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:39.437 14:07:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:39.437 14:07:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:39.437 14:07:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:39.437 14:07:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:39.437 14:07:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:39.437 14:07:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:39.437 14:07:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:39.437 14:07:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:39.437 14:07:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:39.437 14:07:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:39.437 14:07:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:39.437 14:07:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:39.437 14:07:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:39.437 14:07:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:39.437 14:07:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:39.696 nvme0n1 00:36:39.696 14:07:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:39.696 14:07:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:39.696 14:07:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:39.696 14:07:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 
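
Every round in this excerpt ends with the same three-step check, tagged host/auth.sh@64-65 above: list the controllers the bdev layer now sees, assert that exactly the expected one came up, and detach it so the next digest/group/key combination starts clean. Reconstructed as a sketch from the xtrace (the \n\v\m\e\0 noise in the log is just how xtrace escapes the right-hand side of the comparison):

  # connect_authenticate, verification tail: the authenticated attach
  # must have produced a controller named nvme0, and nothing else.
  name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
  [[ $name == "nvme0" ]]
  rpc_cmd bdev_nvme_detach_controller nvme0
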
00:36:39.696 14:07:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:39.696 14:07:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:39.696 14:07:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:39.696 14:07:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:39.696 14:07:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:39.696 14:07:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:39.955 14:07:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:39.955 14:07:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:39.955 14:07:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:36:39.955 14:07:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:39.955 14:07:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:39.955 14:07:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:39.955 14:07:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:39.955 14:07:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2UwNmY2ZGFjZDY1NTEwNGU3YjRiMjkzNWFlYTY1MzYyNzM1MTM3Mzc3NmM3ZGY3oPZozQ==: 00:36:39.955 14:07:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWQ5ZjY4NjlkMDcyNmIyZGE3NjZhZmEwNzMyMTA1YzdmNDAwZDg0NzEwMDQyNzUy92f4Lg==: 00:36:39.955 14:07:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:39.955 14:07:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:39.955 14:07:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2UwNmY2ZGFjZDY1NTEwNGU3YjRiMjkzNWFlYTY1MzYyNzM1MTM3Mzc3NmM3ZGY3oPZozQ==: 00:36:39.955 14:07:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWQ5ZjY4NjlkMDcyNmIyZGE3NjZhZmEwNzMyMTA1YzdmNDAwZDg0NzEwMDQyNzUy92f4Lg==: ]] 00:36:39.955 14:07:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWQ5ZjY4NjlkMDcyNmIyZGE3NjZhZmEwNzMyMTA1YzdmNDAwZDg0NzEwMDQyNzUy92f4Lg==: 00:36:39.955 14:07:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:36:39.955 14:07:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:39.955 14:07:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:39.955 14:07:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:39.955 14:07:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:39.955 14:07:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:39.955 14:07:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:36:39.955 14:07:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:39.955 14:07:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:39.955 14:07:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:39.955 14:07:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:39.955 14:07:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:39.955 14:07:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:39.955 14:07:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:39.955 14:07:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:39.955 14:07:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:39.955 14:07:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:39.955 14:07:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:39.955 14:07:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:39.955 14:07:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:39.955 14:07:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:39.955 14:07:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:39.955 14:07:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:39.955 14:07:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:40.214 nvme0n1 00:36:40.214 14:07:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:40.214 14:07:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:40.214 14:07:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:40.214 14:07:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:40.214 14:07:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:40.214 14:07:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:40.214 14:07:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:40.214 14:07:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:40.214 14:07:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:40.214 14:07:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:40.214 14:07:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:40.214 14:07:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:40.214 14:07:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:36:40.214 14:07:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:40.214 14:07:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:40.214 14:07:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:40.214 14:07:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
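
The host/auth.sh@42-51 sequence above is the target-side half of each round: nvmet_auth_set_key pushes the digest, the DH group and the secret(s) for this key index into the kernel nvmet target before the host tries to attach. The xtrace shows the echo payloads but not their destinations; the sketch below assumes they are the standard per-host nvmet configfs attributes, which is a guess about the script's internals, not something visible in this log:

  nvmet_auth_set_key() {
      local digest=$1 dhgroup=$2 keyid=$3
      local key=${keys[$keyid]} ckey=${ckeys[$keyid]}
      # Assumed configfs location for the allowed host's auth settings.
      local h=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
      echo "hmac(${digest})" > "$h/dhchap_hash"      # e.g. 'hmac(sha256)'
      echo "$dhgroup"        > "$h/dhchap_dhgroup"   # e.g. ffdhe6144
      echo "$key"            > "$h/dhchap_key"       # host secret
      # A controller secret is optional; keyid 4 has none ([[ -z '' ]]).
      [[ -z $ckey ]] || echo "$ckey" > "$h/dhchap_ctrl_key"
  }
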
00:36:40.214 14:07:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWNhODY3NDMyOTlmNjZkYmZiNjU4OTQ0YTljNWMxYWKXTIz0: 00:36:40.214 14:07:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzgxOGQzN2RkNjFjYzE3MTRiMTI1NGE5NDBkZWU2MTPqGmQM: 00:36:40.214 14:07:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:40.214 14:07:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:40.214 14:07:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWNhODY3NDMyOTlmNjZkYmZiNjU4OTQ0YTljNWMxYWKXTIz0: 00:36:40.214 14:07:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzgxOGQzN2RkNjFjYzE3MTRiMTI1NGE5NDBkZWU2MTPqGmQM: ]] 00:36:40.214 14:07:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzgxOGQzN2RkNjFjYzE3MTRiMTI1NGE5NDBkZWU2MTPqGmQM: 00:36:40.214 14:07:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:36:40.214 14:07:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:40.214 14:07:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:40.214 14:07:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:40.214 14:07:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:40.214 14:07:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:40.214 14:07:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:36:40.214 14:07:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:40.214 14:07:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:40.214 14:07:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:40.214 14:07:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:40.214 14:07:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:40.214 14:07:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:40.214 14:07:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:40.214 14:07:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:40.214 14:07:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:40.214 14:07:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:40.214 14:07:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:40.214 14:07:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:40.214 14:07:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:40.214 14:07:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:40.214 14:07:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:40.214 14:07:40 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:40.214 14:07:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:40.781 nvme0n1 00:36:40.781 14:07:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:40.781 14:07:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:40.781 14:07:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:40.781 14:07:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:40.781 14:07:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:40.781 14:07:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:40.781 14:07:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:40.781 14:07:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:40.781 14:07:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:40.781 14:07:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:40.781 14:07:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:40.781 14:07:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:40.781 14:07:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:36:40.781 14:07:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:40.781 14:07:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:40.781 14:07:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:40.781 14:07:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:40.781 14:07:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjY0NzU4ZGZkODAxYWRjN2E3MGExMDA3Yjg3YWYwNjRlNzM3ZmFkYTQyZTA2NjVjOSSd0Q==: 00:36:40.781 14:07:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDUyNmQ1ZGM1MTVhMDEzYjRkMmE3Yjc1ODY0ZGI3N2QxlQ6S: 00:36:40.781 14:07:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:40.781 14:07:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:40.781 14:07:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjY0NzU4ZGZkODAxYWRjN2E3MGExMDA3Yjg3YWYwNjRlNzM3ZmFkYTQyZTA2NjVjOSSd0Q==: 00:36:40.781 14:07:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDUyNmQ1ZGM1MTVhMDEzYjRkMmE3Yjc1ODY0ZGI3N2QxlQ6S: ]] 00:36:40.781 14:07:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDUyNmQ1ZGM1MTVhMDEzYjRkMmE3Yjc1ODY0ZGI3N2QxlQ6S: 00:36:40.781 14:07:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:36:40.782 14:07:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:40.782 14:07:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:40.782 14:07:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:40.782 14:07:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:40.782 14:07:40 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:40.782 14:07:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:36:40.782 14:07:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:40.782 14:07:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:40.782 14:07:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:40.782 14:07:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:40.782 14:07:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:40.782 14:07:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:40.782 14:07:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:40.782 14:07:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:40.782 14:07:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:40.782 14:07:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:40.782 14:07:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:40.782 14:07:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:40.782 14:07:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:40.782 14:07:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:40.782 14:07:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:40.782 14:07:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:40.782 14:07:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:41.040 nvme0n1 00:36:41.040 14:07:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:41.040 14:07:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:41.040 14:07:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:41.040 14:07:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:41.040 14:07:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:41.040 14:07:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:41.300 14:07:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:41.300 14:07:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:41.300 14:07:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:41.300 14:07:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:41.300 14:07:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:41.300 14:07:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for 
keyid in "${!keys[@]}" 00:36:41.300 14:07:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:36:41.300 14:07:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:41.300 14:07:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:41.300 14:07:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:41.300 14:07:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:41.300 14:07:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTVlNzViMTg5YjNkNzFjNDg4NGJhOTA5YmEyMDA3ZjA5OWRjNmQxYjg0NmY5ZTNhYTc1NjBmMDlmY2M1NGJiM1abWWA=: 00:36:41.300 14:07:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:41.300 14:07:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:41.300 14:07:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:41.300 14:07:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTVlNzViMTg5YjNkNzFjNDg4NGJhOTA5YmEyMDA3ZjA5OWRjNmQxYjg0NmY5ZTNhYTc1NjBmMDlmY2M1NGJiM1abWWA=: 00:36:41.300 14:07:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:41.300 14:07:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:36:41.300 14:07:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:41.300 14:07:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:41.300 14:07:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:41.300 14:07:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:41.300 14:07:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:41.300 14:07:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:36:41.300 14:07:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:41.300 14:07:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:41.300 14:07:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:41.300 14:07:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:41.300 14:07:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:41.300 14:07:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:41.300 14:07:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:41.300 14:07:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:41.300 14:07:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:41.300 14:07:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:41.300 14:07:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:41.300 14:07:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:41.300 14:07:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:41.300 14:07:40 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:41.300 14:07:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:41.300 14:07:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:41.300 14:07:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:41.559 nvme0n1 00:36:41.559 14:07:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:41.559 14:07:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:41.559 14:07:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:41.559 14:07:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:41.559 14:07:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:41.559 14:07:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:41.559 14:07:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:41.559 14:07:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:41.559 14:07:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:41.559 14:07:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:41.559 14:07:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:41.559 14:07:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:41.559 14:07:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:41.559 14:07:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:36:41.559 14:07:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:41.559 14:07:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:41.559 14:07:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:41.559 14:07:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:41.559 14:07:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWM0OGViNTIyZGJmNGM4MDdhNzkxNTFhOTI0NDNmNmOjM+Tl: 00:36:41.559 14:07:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjU4NTc1NmZhYzA4ZjQ0Y2I5NDliY2NjOGUwZTVkOTQ2OTNlNjk5NTVjMGZkZWVlNjEwMzM4NTg4MzliMzkzY5vPHe0=: 00:36:41.559 14:07:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:41.559 14:07:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:41.559 14:07:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWM0OGViNTIyZGJmNGM4MDdhNzkxNTFhOTI0NDNmNmOjM+Tl: 00:36:41.559 14:07:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjU4NTc1NmZhYzA4ZjQ0Y2I5NDliY2NjOGUwZTVkOTQ2OTNlNjk5NTVjMGZkZWVlNjEwMzM4NTg4MzliMzkzY5vPHe0=: ]] 00:36:41.559 14:07:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjU4NTc1NmZhYzA4ZjQ0Y2I5NDliY2NjOGUwZTVkOTQ2OTNlNjk5NTVjMGZkZWVlNjEwMzM4NTg4MzliMzkzY5vPHe0=: 
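
The nvmf/common.sh@769-783 block that repeats before every attach is get_main_ns_ip, the helper that resolves which address the initiator should dial: it keys a small map on the transport in use and dereferences the matching environment variable, which on this RDMA rig always lands on 192.168.100.8. A condensed sketch of the logic as it appears in the trace (the transport variable name is an assumption; the guards are the [[ -z ... ]] tests visible above):

  get_main_ns_ip() {
      local ip
      local -A ip_candidates=(
          ["rdma"]=NVMF_FIRST_TARGET_IP    # this run resolves to 192.168.100.8
          ["tcp"]=NVMF_INITIATOR_IP
      )
      [[ -z $TEST_TRANSPORT ]] && return 1      # transport must be set
      ip=${ip_candidates[$TEST_TRANSPORT]}      # pick the variable *name*
      [[ -z $ip || -z ${!ip} ]] && return 1     # and it must resolve
      echo "${!ip}"
  }
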
00:36:41.559 14:07:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:36:41.559 14:07:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:41.559 14:07:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:41.559 14:07:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:41.559 14:07:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:41.559 14:07:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:41.559 14:07:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:36:41.559 14:07:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:41.559 14:07:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:41.559 14:07:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:41.559 14:07:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:41.559 14:07:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:41.559 14:07:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:41.559 14:07:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:41.559 14:07:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:41.559 14:07:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:41.559 14:07:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:41.559 14:07:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:41.559 14:07:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:41.559 14:07:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:41.559 14:07:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:41.559 14:07:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:41.559 14:07:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:41.559 14:07:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:42.127 nvme0n1 00:36:42.127 14:07:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:42.127 14:07:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:42.127 14:07:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:42.127 14:07:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:42.127 14:07:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:42.127 14:07:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:42.127 14:07:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == 
\n\v\m\e\0 ]] 00:36:42.386 14:07:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:42.386 14:07:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:42.386 14:07:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:42.386 14:07:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:42.386 14:07:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:42.386 14:07:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:36:42.386 14:07:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:42.386 14:07:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:42.386 14:07:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:42.386 14:07:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:42.386 14:07:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2UwNmY2ZGFjZDY1NTEwNGU3YjRiMjkzNWFlYTY1MzYyNzM1MTM3Mzc3NmM3ZGY3oPZozQ==: 00:36:42.386 14:07:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWQ5ZjY4NjlkMDcyNmIyZGE3NjZhZmEwNzMyMTA1YzdmNDAwZDg0NzEwMDQyNzUy92f4Lg==: 00:36:42.386 14:07:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:42.386 14:07:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:42.386 14:07:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2UwNmY2ZGFjZDY1NTEwNGU3YjRiMjkzNWFlYTY1MzYyNzM1MTM3Mzc3NmM3ZGY3oPZozQ==: 00:36:42.386 14:07:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWQ5ZjY4NjlkMDcyNmIyZGE3NjZhZmEwNzMyMTA1YzdmNDAwZDg0NzEwMDQyNzUy92f4Lg==: ]] 00:36:42.386 14:07:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWQ5ZjY4NjlkMDcyNmIyZGE3NjZhZmEwNzMyMTA1YzdmNDAwZDg0NzEwMDQyNzUy92f4Lg==: 00:36:42.387 14:07:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:36:42.387 14:07:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:42.387 14:07:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:42.387 14:07:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:42.387 14:07:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:42.387 14:07:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:42.387 14:07:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:36:42.387 14:07:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:42.387 14:07:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:42.387 14:07:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:42.387 14:07:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:42.387 14:07:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:42.387 14:07:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates=() 00:36:42.387 14:07:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:42.387 14:07:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:42.387 14:07:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:42.387 14:07:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:42.387 14:07:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:42.387 14:07:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:42.387 14:07:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:42.387 14:07:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:42.387 14:07:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:42.387 14:07:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:42.387 14:07:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:42.954 nvme0n1 00:36:42.954 14:07:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:42.954 14:07:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:42.954 14:07:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:42.954 14:07:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:42.954 14:07:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:42.954 14:07:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:42.954 14:07:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:42.955 14:07:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:42.955 14:07:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:42.955 14:07:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:42.955 14:07:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:42.955 14:07:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:42.955 14:07:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:36:42.955 14:07:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:42.955 14:07:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:42.955 14:07:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:42.955 14:07:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:42.955 14:07:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWNhODY3NDMyOTlmNjZkYmZiNjU4OTQ0YTljNWMxYWKXTIz0: 00:36:42.955 14:07:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzgxOGQzN2RkNjFjYzE3MTRiMTI1NGE5NDBkZWU2MTPqGmQM: 00:36:42.955 14:07:42 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:42.955 14:07:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:42.955 14:07:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWNhODY3NDMyOTlmNjZkYmZiNjU4OTQ0YTljNWMxYWKXTIz0: 00:36:42.955 14:07:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzgxOGQzN2RkNjFjYzE3MTRiMTI1NGE5NDBkZWU2MTPqGmQM: ]] 00:36:42.955 14:07:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzgxOGQzN2RkNjFjYzE3MTRiMTI1NGE5NDBkZWU2MTPqGmQM: 00:36:42.955 14:07:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:36:42.955 14:07:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:42.955 14:07:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:42.955 14:07:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:42.955 14:07:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:42.955 14:07:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:42.955 14:07:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:36:42.955 14:07:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:42.955 14:07:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:42.955 14:07:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:42.955 14:07:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:42.955 14:07:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:42.955 14:07:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:42.955 14:07:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:42.955 14:07:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:42.955 14:07:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:42.955 14:07:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:42.955 14:07:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:42.955 14:07:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:42.955 14:07:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:42.955 14:07:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:42.955 14:07:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:42.955 14:07:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:42.955 14:07:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:43.523 nvme0n1 00:36:43.523 14:07:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:43.523 
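
All the host-side steps in this log go through rpc_cmd, which in the SPDK test harness forwards to scripts/rpc.py on the application's RPC socket. The two calls driving each round can be issued by hand as well; the socket path below is an assumption, while the RPC names and flags are exactly the ones in the trace (key2/ckey2 are the names of keys presumably registered with the keyring earlier in the run, not inline secrets):

  # Pin the initiator to one digest / DH-group pair before attaching.
  scripts/rpc.py -s /var/tmp/spdk.sock bdev_nvme_set_options \
      --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192

  # Attach with bidirectional DH-HMAC-CHAP: key2 authenticates the host
  # to the controller, ckey2 authenticates the controller back.
  scripts/rpc.py -s /var/tmp/spdk.sock bdev_nvme_attach_controller \
      -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2
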
14:07:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:43.523 14:07:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:43.523 14:07:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:43.523 14:07:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:43.523 14:07:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:43.523 14:07:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:43.523 14:07:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:43.523 14:07:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:43.523 14:07:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:43.523 14:07:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:43.523 14:07:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:43.523 14:07:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:36:43.523 14:07:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:43.523 14:07:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:43.523 14:07:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:43.523 14:07:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:43.523 14:07:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjY0NzU4ZGZkODAxYWRjN2E3MGExMDA3Yjg3YWYwNjRlNzM3ZmFkYTQyZTA2NjVjOSSd0Q==: 00:36:43.523 14:07:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDUyNmQ1ZGM1MTVhMDEzYjRkMmE3Yjc1ODY0ZGI3N2QxlQ6S: 00:36:43.524 14:07:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:43.524 14:07:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:43.524 14:07:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjY0NzU4ZGZkODAxYWRjN2E3MGExMDA3Yjg3YWYwNjRlNzM3ZmFkYTQyZTA2NjVjOSSd0Q==: 00:36:43.524 14:07:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDUyNmQ1ZGM1MTVhMDEzYjRkMmE3Yjc1ODY0ZGI3N2QxlQ6S: ]] 00:36:43.524 14:07:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDUyNmQ1ZGM1MTVhMDEzYjRkMmE3Yjc1ODY0ZGI3N2QxlQ6S: 00:36:43.524 14:07:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:36:43.524 14:07:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:43.524 14:07:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:43.524 14:07:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:43.524 14:07:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:43.524 14:07:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:43.524 14:07:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:36:43.524 14:07:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:36:43.524 14:07:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:43.524 14:07:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:43.524 14:07:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:43.524 14:07:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:43.524 14:07:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:43.524 14:07:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:43.524 14:07:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:43.524 14:07:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:43.524 14:07:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:43.524 14:07:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:43.524 14:07:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:43.524 14:07:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:43.524 14:07:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:43.524 14:07:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:43.524 14:07:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:43.524 14:07:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:44.092 nvme0n1 00:36:44.092 14:07:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:44.092 14:07:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:44.092 14:07:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:44.092 14:07:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:44.092 14:07:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:44.092 14:07:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:44.092 14:07:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:44.092 14:07:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:44.092 14:07:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:44.092 14:07:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:44.092 14:07:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:44.092 14:07:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:44.092 14:07:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:36:44.092 14:07:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:44.092 14:07:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
digest=sha256 00:36:44.092 14:07:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:44.092 14:07:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:44.092 14:07:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTVlNzViMTg5YjNkNzFjNDg4NGJhOTA5YmEyMDA3ZjA5OWRjNmQxYjg0NmY5ZTNhYTc1NjBmMDlmY2M1NGJiM1abWWA=: 00:36:44.092 14:07:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:44.092 14:07:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:44.092 14:07:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:44.092 14:07:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTVlNzViMTg5YjNkNzFjNDg4NGJhOTA5YmEyMDA3ZjA5OWRjNmQxYjg0NmY5ZTNhYTc1NjBmMDlmY2M1NGJiM1abWWA=: 00:36:44.092 14:07:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:44.092 14:07:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:36:44.092 14:07:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:44.092 14:07:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:44.092 14:07:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:44.092 14:07:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:44.092 14:07:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:44.092 14:07:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:36:44.092 14:07:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:44.092 14:07:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:44.092 14:07:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:44.092 14:07:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:44.092 14:07:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:44.092 14:07:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:44.092 14:07:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:44.092 14:07:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:44.092 14:07:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:44.092 14:07:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:44.092 14:07:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:44.092 14:07:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:44.092 14:07:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:44.092 14:07:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:44.092 14:07:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:44.092 14:07:43 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:44.092 14:07:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:44.660 nvme0n1 00:36:44.660 14:07:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:44.660 14:07:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:44.660 14:07:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:44.660 14:07:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:44.660 14:07:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:44.660 14:07:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:44.920 14:07:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:44.920 14:07:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:44.920 14:07:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:44.920 14:07:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:44.920 14:07:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:44.920 14:07:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:36:44.920 14:07:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:44.920 14:07:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:44.920 14:07:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:36:44.920 14:07:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:44.920 14:07:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:44.920 14:07:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:44.920 14:07:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:44.920 14:07:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWM0OGViNTIyZGJmNGM4MDdhNzkxNTFhOTI0NDNmNmOjM+Tl: 00:36:44.920 14:07:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjU4NTc1NmZhYzA4ZjQ0Y2I5NDliY2NjOGUwZTVkOTQ2OTNlNjk5NTVjMGZkZWVlNjEwMzM4NTg4MzliMzkzY5vPHe0=: 00:36:44.920 14:07:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:44.920 14:07:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:44.920 14:07:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWM0OGViNTIyZGJmNGM4MDdhNzkxNTFhOTI0NDNmNmOjM+Tl: 00:36:44.920 14:07:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjU4NTc1NmZhYzA4ZjQ0Y2I5NDliY2NjOGUwZTVkOTQ2OTNlNjk5NTVjMGZkZWVlNjEwMzM4NTg4MzliMzkzY5vPHe0=: ]] 00:36:44.920 14:07:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjU4NTc1NmZhYzA4ZjQ0Y2I5NDliY2NjOGUwZTVkOTQ2OTNlNjk5NTVjMGZkZWVlNjEwMzM4NTg4MzliMzkzY5vPHe0=: 00:36:44.920 14:07:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:36:44.920 14:07:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
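Each connect_authenticate round in the trace above reduces to the same four SPDK RPCs: constrain the initiator's DH-HMAC-CHAP parameters with bdev_nvme_set_options, attach with bdev_nvme_attach_controller (get_main_ns_ip resolves to NVMF_FIRST_TARGET_IP, 192.168.100.8 here, because the transport is rdma; for tcp it would pick NVMF_INITIATOR_IP), confirm the controller exists with bdev_nvme_get_controllers, and detach. The sketch below replays one such round outside the harness; it is an illustration assembled from the RPC names and flags visible in this log, not the test script itself. The rpc.py path is assumed, and key0/ckey0 must already be registered in the SPDK keyring (the harness does that before the section shown here).

#!/usr/bin/env bash
# Sketch of one connect_authenticate round (sha384 / ffdhe2048 / keyid 0).
# RPC names and flags are taken verbatim from the xtrace above; the rpc.py
# path and the pre-registered keyring names key0/ckey0 are assumptions.
rpc=./scripts/rpc.py

# Limit the host to the digest/dhgroup pair under test.
$rpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048

# Attach over RDMA, authenticating with key0 and requiring the controller
# to authenticate back with ckey0 (bidirectional DH-HMAC-CHAP).
$rpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
    -a 192.168.100.8 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Authentication succeeded iff the controller came up under the name nvme0.
[[ $($rpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

# Tear down before the next digest/dhgroup/keyid combination.
$rpc bdev_nvme_detach_controller nvme0
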
00:36:44.920 14:07:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:44.920 14:07:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:44.920 14:07:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:44.920 14:07:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:44.920 14:07:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:36:44.920 14:07:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:44.920 14:07:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:44.920 14:07:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:44.920 14:07:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:44.920 14:07:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:44.920 14:07:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:44.920 14:07:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:44.920 14:07:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:44.920 14:07:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:44.920 14:07:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:44.920 14:07:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:44.920 14:07:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:44.920 14:07:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:44.920 14:07:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:44.920 14:07:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:44.920 14:07:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:44.920 14:07:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:44.920 nvme0n1 00:36:44.920 14:07:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:44.921 14:07:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:44.921 14:07:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:44.921 14:07:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:44.921 14:07:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:44.921 14:07:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:45.180 14:07:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:45.180 14:07:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:45.180 14:07:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:36:45.180 14:07:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:45.180 14:07:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:45.181 14:07:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:45.181 14:07:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:36:45.181 14:07:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:45.181 14:07:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:45.181 14:07:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:45.181 14:07:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:45.181 14:07:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2UwNmY2ZGFjZDY1NTEwNGU3YjRiMjkzNWFlYTY1MzYyNzM1MTM3Mzc3NmM3ZGY3oPZozQ==: 00:36:45.181 14:07:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWQ5ZjY4NjlkMDcyNmIyZGE3NjZhZmEwNzMyMTA1YzdmNDAwZDg0NzEwMDQyNzUy92f4Lg==: 00:36:45.181 14:07:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:45.181 14:07:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:45.181 14:07:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2UwNmY2ZGFjZDY1NTEwNGU3YjRiMjkzNWFlYTY1MzYyNzM1MTM3Mzc3NmM3ZGY3oPZozQ==: 00:36:45.181 14:07:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWQ5ZjY4NjlkMDcyNmIyZGE3NjZhZmEwNzMyMTA1YzdmNDAwZDg0NzEwMDQyNzUy92f4Lg==: ]] 00:36:45.181 14:07:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWQ5ZjY4NjlkMDcyNmIyZGE3NjZhZmEwNzMyMTA1YzdmNDAwZDg0NzEwMDQyNzUy92f4Lg==: 00:36:45.181 14:07:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:36:45.181 14:07:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:45.181 14:07:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:45.181 14:07:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:45.181 14:07:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:45.181 14:07:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:45.181 14:07:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:36:45.181 14:07:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:45.181 14:07:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:45.181 14:07:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:45.181 14:07:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:45.181 14:07:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:45.181 14:07:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:45.181 14:07:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:45.181 14:07:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:45.181 14:07:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:45.181 14:07:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:45.181 14:07:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:45.181 14:07:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:45.181 14:07:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:45.181 14:07:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:45.181 14:07:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:45.181 14:07:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:45.181 14:07:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:45.181 nvme0n1 00:36:45.181 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:45.181 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:45.181 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:45.181 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:45.181 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:45.181 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:45.453 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:45.453 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:45.453 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:45.453 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:45.453 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:45.453 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:45.453 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:36:45.453 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:45.454 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:45.454 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:45.454 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:45.454 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWNhODY3NDMyOTlmNjZkYmZiNjU4OTQ0YTljNWMxYWKXTIz0: 00:36:45.454 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzgxOGQzN2RkNjFjYzE3MTRiMTI1NGE5NDBkZWU2MTPqGmQM: 00:36:45.454 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:45.454 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:45.454 14:07:45 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWNhODY3NDMyOTlmNjZkYmZiNjU4OTQ0YTljNWMxYWKXTIz0: 00:36:45.454 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzgxOGQzN2RkNjFjYzE3MTRiMTI1NGE5NDBkZWU2MTPqGmQM: ]] 00:36:45.454 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzgxOGQzN2RkNjFjYzE3MTRiMTI1NGE5NDBkZWU2MTPqGmQM: 00:36:45.454 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:36:45.454 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:45.454 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:45.454 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:45.454 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:45.454 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:45.454 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:36:45.454 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:45.454 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:45.454 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:45.454 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:45.454 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:45.454 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:45.454 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:45.454 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:45.454 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:45.455 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:45.455 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:45.455 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:45.455 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:45.455 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:45.455 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:45.455 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:45.455 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:45.455 nvme0n1 00:36:45.455 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:45.455 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:45.455 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r 
'.[].name' 00:36:45.455 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:45.455 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:45.723 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:45.723 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:45.723 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:45.723 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:45.723 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:45.723 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:45.723 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:45.723 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:36:45.723 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:45.723 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:45.723 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:45.723 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:45.723 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjY0NzU4ZGZkODAxYWRjN2E3MGExMDA3Yjg3YWYwNjRlNzM3ZmFkYTQyZTA2NjVjOSSd0Q==: 00:36:45.723 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDUyNmQ1ZGM1MTVhMDEzYjRkMmE3Yjc1ODY0ZGI3N2QxlQ6S: 00:36:45.723 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:45.723 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:45.723 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjY0NzU4ZGZkODAxYWRjN2E3MGExMDA3Yjg3YWYwNjRlNzM3ZmFkYTQyZTA2NjVjOSSd0Q==: 00:36:45.723 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDUyNmQ1ZGM1MTVhMDEzYjRkMmE3Yjc1ODY0ZGI3N2QxlQ6S: ]] 00:36:45.723 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDUyNmQ1ZGM1MTVhMDEzYjRkMmE3Yjc1ODY0ZGI3N2QxlQ6S: 00:36:45.723 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:36:45.723 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:45.723 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:45.723 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:45.723 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:45.723 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:45.723 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:36:45.723 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:45.723 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:45.723 14:07:45 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:45.723 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:45.723 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:45.723 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:45.723 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:45.723 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:45.723 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:45.723 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:45.723 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:45.723 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:45.723 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:45.723 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:45.723 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:45.723 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:45.723 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:45.723 nvme0n1 00:36:45.723 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:45.723 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:45.723 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:45.723 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:45.723 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:45.723 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:45.982 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:45.982 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:45.982 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:45.982 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:45.982 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:45.982 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:45.982 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:36:45.982 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:45.982 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:45.982 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:45.982 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=4 00:36:45.982 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTVlNzViMTg5YjNkNzFjNDg4NGJhOTA5YmEyMDA3ZjA5OWRjNmQxYjg0NmY5ZTNhYTc1NjBmMDlmY2M1NGJiM1abWWA=: 00:36:45.982 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:45.982 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:45.982 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:45.982 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTVlNzViMTg5YjNkNzFjNDg4NGJhOTA5YmEyMDA3ZjA5OWRjNmQxYjg0NmY5ZTNhYTc1NjBmMDlmY2M1NGJiM1abWWA=: 00:36:45.982 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:45.982 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:36:45.982 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:45.982 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:45.982 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:45.982 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:45.982 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:45.982 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:36:45.982 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:45.982 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:45.982 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:45.982 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:45.982 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:45.982 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:45.982 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:45.982 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:45.982 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:45.982 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:45.982 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:45.982 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:45.982 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:45.982 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:45.982 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:45.982 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:45.982 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set 
+x 00:36:45.982 nvme0n1 00:36:45.982 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:45.982 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:45.982 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:45.982 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:45.982 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:46.241 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:46.241 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:46.241 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:46.241 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:46.241 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:46.241 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:46.241 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:46.241 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:46.241 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:36:46.241 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:46.241 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:46.241 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:46.241 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:46.241 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWM0OGViNTIyZGJmNGM4MDdhNzkxNTFhOTI0NDNmNmOjM+Tl: 00:36:46.241 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjU4NTc1NmZhYzA4ZjQ0Y2I5NDliY2NjOGUwZTVkOTQ2OTNlNjk5NTVjMGZkZWVlNjEwMzM4NTg4MzliMzkzY5vPHe0=: 00:36:46.241 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:46.241 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:46.241 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWM0OGViNTIyZGJmNGM4MDdhNzkxNTFhOTI0NDNmNmOjM+Tl: 00:36:46.241 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjU4NTc1NmZhYzA4ZjQ0Y2I5NDliY2NjOGUwZTVkOTQ2OTNlNjk5NTVjMGZkZWVlNjEwMzM4NTg4MzliMzkzY5vPHe0=: ]] 00:36:46.241 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjU4NTc1NmZhYzA4ZjQ0Y2I5NDliY2NjOGUwZTVkOTQ2OTNlNjk5NTVjMGZkZWVlNjEwMzM4NTg4MzliMzkzY5vPHe0=: 00:36:46.241 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:36:46.241 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:46.241 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:46.241 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:46.241 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:46.241 
14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:46.241 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:36:46.241 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:46.241 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:46.241 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:46.241 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:46.241 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:46.241 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:46.241 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:46.241 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:46.241 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:46.241 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:46.241 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:46.241 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:46.241 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:46.241 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:46.241 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:46.241 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:46.241 14:07:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:46.500 nvme0n1 00:36:46.500 14:07:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:46.500 14:07:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:46.500 14:07:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:46.500 14:07:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:46.500 14:07:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:46.500 14:07:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:46.500 14:07:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:46.500 14:07:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:46.500 14:07:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:46.500 14:07:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:46.500 14:07:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:46.500 14:07:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- 
# for keyid in "${!keys[@]}" 00:36:46.500 14:07:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:36:46.500 14:07:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:46.500 14:07:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:46.500 14:07:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:46.500 14:07:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:46.500 14:07:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2UwNmY2ZGFjZDY1NTEwNGU3YjRiMjkzNWFlYTY1MzYyNzM1MTM3Mzc3NmM3ZGY3oPZozQ==: 00:36:46.500 14:07:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWQ5ZjY4NjlkMDcyNmIyZGE3NjZhZmEwNzMyMTA1YzdmNDAwZDg0NzEwMDQyNzUy92f4Lg==: 00:36:46.500 14:07:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:46.500 14:07:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:46.500 14:07:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2UwNmY2ZGFjZDY1NTEwNGU3YjRiMjkzNWFlYTY1MzYyNzM1MTM3Mzc3NmM3ZGY3oPZozQ==: 00:36:46.500 14:07:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWQ5ZjY4NjlkMDcyNmIyZGE3NjZhZmEwNzMyMTA1YzdmNDAwZDg0NzEwMDQyNzUy92f4Lg==: ]] 00:36:46.500 14:07:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWQ5ZjY4NjlkMDcyNmIyZGE3NjZhZmEwNzMyMTA1YzdmNDAwZDg0NzEwMDQyNzUy92f4Lg==: 00:36:46.500 14:07:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:36:46.500 14:07:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:46.500 14:07:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:46.500 14:07:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:46.500 14:07:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:46.500 14:07:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:46.500 14:07:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:36:46.500 14:07:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:46.500 14:07:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:46.500 14:07:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:46.500 14:07:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:46.500 14:07:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:46.500 14:07:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:46.500 14:07:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:46.500 14:07:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:46.500 14:07:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:46.500 14:07:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:46.500 14:07:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:46.500 14:07:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:46.500 14:07:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:46.500 14:07:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:46.500 14:07:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:46.500 14:07:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:46.500 14:07:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:46.759 nvme0n1 00:36:46.759 14:07:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:46.759 14:07:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:46.759 14:07:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:46.759 14:07:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:46.759 14:07:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:46.759 14:07:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:46.759 14:07:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:46.759 14:07:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:46.759 14:07:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:46.759 14:07:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:46.759 14:07:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:46.759 14:07:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:46.759 14:07:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:36:46.759 14:07:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:46.759 14:07:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:46.759 14:07:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:46.759 14:07:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:46.759 14:07:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWNhODY3NDMyOTlmNjZkYmZiNjU4OTQ0YTljNWMxYWKXTIz0: 00:36:46.759 14:07:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzgxOGQzN2RkNjFjYzE3MTRiMTI1NGE5NDBkZWU2MTPqGmQM: 00:36:46.759 14:07:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:46.759 14:07:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:46.759 14:07:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWNhODY3NDMyOTlmNjZkYmZiNjU4OTQ0YTljNWMxYWKXTIz0: 00:36:46.759 14:07:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzgxOGQzN2RkNjFjYzE3MTRiMTI1NGE5NDBkZWU2MTPqGmQM: ]] 00:36:46.759 14:07:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- 
# echo DHHC-1:01:MzgxOGQzN2RkNjFjYzE3MTRiMTI1NGE5NDBkZWU2MTPqGmQM: 00:36:46.759 14:07:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:36:46.759 14:07:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:46.759 14:07:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:46.759 14:07:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:46.759 14:07:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:46.759 14:07:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:46.759 14:07:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:36:46.759 14:07:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:46.759 14:07:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:46.759 14:07:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:46.759 14:07:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:46.759 14:07:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:46.759 14:07:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:46.759 14:07:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:46.759 14:07:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:46.759 14:07:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:46.759 14:07:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:46.759 14:07:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:46.759 14:07:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:46.759 14:07:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:46.759 14:07:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:46.759 14:07:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:46.759 14:07:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:46.759 14:07:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:47.019 nvme0n1 00:36:47.019 14:07:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:47.019 14:07:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:47.019 14:07:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:47.019 14:07:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:47.019 14:07:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:47.019 14:07:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:47.019 14:07:46 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:47.019 14:07:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:47.019 14:07:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:47.019 14:07:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:47.019 14:07:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:47.019 14:07:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:47.019 14:07:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:36:47.019 14:07:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:47.019 14:07:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:47.019 14:07:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:47.019 14:07:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:47.019 14:07:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjY0NzU4ZGZkODAxYWRjN2E3MGExMDA3Yjg3YWYwNjRlNzM3ZmFkYTQyZTA2NjVjOSSd0Q==: 00:36:47.019 14:07:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDUyNmQ1ZGM1MTVhMDEzYjRkMmE3Yjc1ODY0ZGI3N2QxlQ6S: 00:36:47.019 14:07:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:47.019 14:07:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:47.019 14:07:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjY0NzU4ZGZkODAxYWRjN2E3MGExMDA3Yjg3YWYwNjRlNzM3ZmFkYTQyZTA2NjVjOSSd0Q==: 00:36:47.019 14:07:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDUyNmQ1ZGM1MTVhMDEzYjRkMmE3Yjc1ODY0ZGI3N2QxlQ6S: ]] 00:36:47.019 14:07:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDUyNmQ1ZGM1MTVhMDEzYjRkMmE3Yjc1ODY0ZGI3N2QxlQ6S: 00:36:47.019 14:07:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:36:47.019 14:07:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:47.019 14:07:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:47.019 14:07:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:47.019 14:07:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:47.019 14:07:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:47.019 14:07:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:36:47.019 14:07:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:47.019 14:07:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:47.019 14:07:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:47.019 14:07:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:47.019 14:07:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:47.020 14:07:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates=() 00:36:47.020 14:07:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:47.020 14:07:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:47.020 14:07:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:47.020 14:07:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:47.020 14:07:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:47.020 14:07:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:47.020 14:07:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:47.020 14:07:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:47.020 14:07:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:47.020 14:07:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:47.020 14:07:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:47.279 nvme0n1 00:36:47.279 14:07:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:47.279 14:07:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:47.279 14:07:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:47.279 14:07:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:47.279 14:07:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:47.279 14:07:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:47.279 14:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:47.279 14:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:47.279 14:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:47.279 14:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:47.279 14:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:47.279 14:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:47.279 14:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:36:47.279 14:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:47.279 14:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:47.279 14:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:47.279 14:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:47.279 14:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTVlNzViMTg5YjNkNzFjNDg4NGJhOTA5YmEyMDA3ZjA5OWRjNmQxYjg0NmY5ZTNhYTc1NjBmMDlmY2M1NGJiM1abWWA=: 00:36:47.279 14:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:47.279 14:07:47 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:47.279 14:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:47.279 14:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTVlNzViMTg5YjNkNzFjNDg4NGJhOTA5YmEyMDA3ZjA5OWRjNmQxYjg0NmY5ZTNhYTc1NjBmMDlmY2M1NGJiM1abWWA=: 00:36:47.279 14:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:47.279 14:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:36:47.279 14:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:47.279 14:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:47.279 14:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:47.279 14:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:47.279 14:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:47.279 14:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:36:47.279 14:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:47.279 14:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:47.279 14:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:47.279 14:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:47.279 14:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:47.279 14:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:47.279 14:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:47.279 14:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:47.279 14:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:47.279 14:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:47.279 14:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:47.279 14:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:47.279 14:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:47.279 14:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:47.279 14:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:47.279 14:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:47.279 14:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:47.539 nvme0n1 00:36:47.539 14:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:47.539 14:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:47.539 14:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:36:47.539 14:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:47.539 14:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:47.539 14:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:47.539 14:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:47.539 14:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:47.539 14:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:47.539 14:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:47.539 14:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:47.539 14:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:47.539 14:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:47.539 14:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:36:47.539 14:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:47.539 14:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:47.539 14:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:47.539 14:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:47.539 14:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWM0OGViNTIyZGJmNGM4MDdhNzkxNTFhOTI0NDNmNmOjM+Tl: 00:36:47.539 14:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjU4NTc1NmZhYzA4ZjQ0Y2I5NDliY2NjOGUwZTVkOTQ2OTNlNjk5NTVjMGZkZWVlNjEwMzM4NTg4MzliMzkzY5vPHe0=: 00:36:47.539 14:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:47.539 14:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:47.539 14:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWM0OGViNTIyZGJmNGM4MDdhNzkxNTFhOTI0NDNmNmOjM+Tl: 00:36:47.539 14:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjU4NTc1NmZhYzA4ZjQ0Y2I5NDliY2NjOGUwZTVkOTQ2OTNlNjk5NTVjMGZkZWVlNjEwMzM4NTg4MzliMzkzY5vPHe0=: ]] 00:36:47.539 14:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjU4NTc1NmZhYzA4ZjQ0Y2I5NDliY2NjOGUwZTVkOTQ2OTNlNjk5NTVjMGZkZWVlNjEwMzM4NTg4MzliMzkzY5vPHe0=: 00:36:47.539 14:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:36:47.539 14:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:47.539 14:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:47.539 14:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:47.539 14:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:47.539 14:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:47.539 14:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:36:47.539 14:07:47 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:47.539 14:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:47.539 14:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:47.539 14:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:47.539 14:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:47.539 14:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:47.539 14:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:47.539 14:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:47.539 14:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:47.539 14:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:47.539 14:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:47.539 14:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:47.539 14:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:47.539 14:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:47.539 14:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:47.539 14:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:47.539 14:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:47.797 nvme0n1 00:36:47.797 14:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:47.797 14:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:47.797 14:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:47.797 14:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:47.797 14:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:47.797 14:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:48.056 14:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:48.056 14:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:48.056 14:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:48.056 14:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:48.056 14:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:48.056 14:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:48.056 14:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:36:48.056 14:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:48.056 14:07:47 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:48.056 14:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:48.056 14:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:48.056 14:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2UwNmY2ZGFjZDY1NTEwNGU3YjRiMjkzNWFlYTY1MzYyNzM1MTM3Mzc3NmM3ZGY3oPZozQ==: 00:36:48.056 14:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWQ5ZjY4NjlkMDcyNmIyZGE3NjZhZmEwNzMyMTA1YzdmNDAwZDg0NzEwMDQyNzUy92f4Lg==: 00:36:48.056 14:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:48.056 14:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:48.056 14:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2UwNmY2ZGFjZDY1NTEwNGU3YjRiMjkzNWFlYTY1MzYyNzM1MTM3Mzc3NmM3ZGY3oPZozQ==: 00:36:48.056 14:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWQ5ZjY4NjlkMDcyNmIyZGE3NjZhZmEwNzMyMTA1YzdmNDAwZDg0NzEwMDQyNzUy92f4Lg==: ]] 00:36:48.056 14:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWQ5ZjY4NjlkMDcyNmIyZGE3NjZhZmEwNzMyMTA1YzdmNDAwZDg0NzEwMDQyNzUy92f4Lg==: 00:36:48.056 14:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:36:48.056 14:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:48.056 14:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:48.056 14:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:48.056 14:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:48.056 14:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:48.056 14:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:36:48.056 14:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:48.056 14:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:48.056 14:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:48.056 14:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:48.056 14:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:48.056 14:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:48.056 14:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:48.056 14:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:48.056 14:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:48.056 14:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:48.056 14:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:48.056 14:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:48.056 14:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:48.056 14:07:47 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:48.056 14:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:48.056 14:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:48.056 14:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:48.314 nvme0n1 00:36:48.314 14:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:48.314 14:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:48.314 14:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:48.314 14:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:48.314 14:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:48.314 14:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:48.314 14:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:48.314 14:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:48.314 14:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:48.314 14:07:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:48.314 14:07:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:48.314 14:07:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:48.314 14:07:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:36:48.314 14:07:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:48.314 14:07:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:48.314 14:07:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:48.314 14:07:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:48.314 14:07:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWNhODY3NDMyOTlmNjZkYmZiNjU4OTQ0YTljNWMxYWKXTIz0: 00:36:48.314 14:07:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzgxOGQzN2RkNjFjYzE3MTRiMTI1NGE5NDBkZWU2MTPqGmQM: 00:36:48.314 14:07:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:48.314 14:07:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:48.314 14:07:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWNhODY3NDMyOTlmNjZkYmZiNjU4OTQ0YTljNWMxYWKXTIz0: 00:36:48.314 14:07:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzgxOGQzN2RkNjFjYzE3MTRiMTI1NGE5NDBkZWU2MTPqGmQM: ]] 00:36:48.314 14:07:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzgxOGQzN2RkNjFjYzE3MTRiMTI1NGE5NDBkZWU2MTPqGmQM: 00:36:48.314 14:07:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:36:48.314 14:07:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup 
keyid ckey 00:36:48.314 14:07:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:48.314 14:07:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:48.314 14:07:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:48.314 14:07:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:48.314 14:07:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:36:48.314 14:07:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:48.314 14:07:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:48.314 14:07:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:48.314 14:07:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:48.314 14:07:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:48.314 14:07:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:48.314 14:07:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:48.314 14:07:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:48.314 14:07:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:48.314 14:07:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:48.314 14:07:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:48.314 14:07:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:48.314 14:07:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:48.314 14:07:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:48.314 14:07:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:48.314 14:07:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:48.315 14:07:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:48.573 nvme0n1 00:36:48.573 14:07:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:48.573 14:07:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:48.573 14:07:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:48.573 14:07:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:48.573 14:07:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:48.573 14:07:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:48.573 14:07:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:48.573 14:07:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:48.573 14:07:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 
-- # xtrace_disable 00:36:48.573 14:07:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:48.573 14:07:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:48.573 14:07:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:48.573 14:07:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:36:48.573 14:07:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:48.573 14:07:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:48.573 14:07:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:48.573 14:07:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:48.573 14:07:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjY0NzU4ZGZkODAxYWRjN2E3MGExMDA3Yjg3YWYwNjRlNzM3ZmFkYTQyZTA2NjVjOSSd0Q==: 00:36:48.573 14:07:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDUyNmQ1ZGM1MTVhMDEzYjRkMmE3Yjc1ODY0ZGI3N2QxlQ6S: 00:36:48.573 14:07:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:48.573 14:07:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:48.573 14:07:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjY0NzU4ZGZkODAxYWRjN2E3MGExMDA3Yjg3YWYwNjRlNzM3ZmFkYTQyZTA2NjVjOSSd0Q==: 00:36:48.573 14:07:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDUyNmQ1ZGM1MTVhMDEzYjRkMmE3Yjc1ODY0ZGI3N2QxlQ6S: ]] 00:36:48.573 14:07:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDUyNmQ1ZGM1MTVhMDEzYjRkMmE3Yjc1ODY0ZGI3N2QxlQ6S: 00:36:48.573 14:07:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:36:48.573 14:07:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:48.573 14:07:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:48.573 14:07:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:48.573 14:07:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:48.573 14:07:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:48.573 14:07:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:36:48.573 14:07:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:48.573 14:07:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:48.573 14:07:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:48.573 14:07:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:48.573 14:07:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:48.573 14:07:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:48.573 14:07:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:48.573 14:07:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:48.573 14:07:48 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:48.573 14:07:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:48.573 14:07:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:48.573 14:07:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:48.573 14:07:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:48.573 14:07:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:48.573 14:07:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:48.573 14:07:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:48.573 14:07:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:48.831 nvme0n1 00:36:48.831 14:07:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:48.831 14:07:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:48.831 14:07:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:48.831 14:07:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:48.831 14:07:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:48.831 14:07:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:49.089 14:07:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:49.089 14:07:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:49.089 14:07:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:49.089 14:07:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:49.089 14:07:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:49.089 14:07:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:49.089 14:07:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:36:49.089 14:07:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:49.089 14:07:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:49.089 14:07:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:49.089 14:07:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:49.089 14:07:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTVlNzViMTg5YjNkNzFjNDg4NGJhOTA5YmEyMDA3ZjA5OWRjNmQxYjg0NmY5ZTNhYTc1NjBmMDlmY2M1NGJiM1abWWA=: 00:36:49.089 14:07:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:49.089 14:07:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:49.089 14:07:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:49.089 14:07:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NTVlNzViMTg5YjNkNzFjNDg4NGJhOTA5YmEyMDA3ZjA5OWRjNmQxYjg0NmY5ZTNhYTc1NjBmMDlmY2M1NGJiM1abWWA=: 00:36:49.089 14:07:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:49.089 14:07:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:36:49.089 14:07:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:49.089 14:07:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:49.089 14:07:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:49.089 14:07:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:49.089 14:07:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:49.089 14:07:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:36:49.089 14:07:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:49.089 14:07:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:49.089 14:07:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:49.089 14:07:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:49.089 14:07:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:49.089 14:07:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:49.089 14:07:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:49.089 14:07:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:49.089 14:07:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:49.089 14:07:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:49.089 14:07:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:49.089 14:07:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:49.089 14:07:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:49.089 14:07:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:49.089 14:07:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:49.089 14:07:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:49.089 14:07:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:49.347 nvme0n1 00:36:49.347 14:07:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:49.347 14:07:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:49.347 14:07:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:49.347 14:07:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:49.347 14:07:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:49.347 14:07:49 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:49.347 14:07:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:49.347 14:07:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:49.347 14:07:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:49.347 14:07:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:49.347 14:07:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:49.347 14:07:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:49.347 14:07:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:49.347 14:07:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:36:49.347 14:07:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:49.347 14:07:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:49.347 14:07:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:49.347 14:07:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:49.347 14:07:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWM0OGViNTIyZGJmNGM4MDdhNzkxNTFhOTI0NDNmNmOjM+Tl: 00:36:49.347 14:07:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjU4NTc1NmZhYzA4ZjQ0Y2I5NDliY2NjOGUwZTVkOTQ2OTNlNjk5NTVjMGZkZWVlNjEwMzM4NTg4MzliMzkzY5vPHe0=: 00:36:49.347 14:07:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:49.347 14:07:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:49.347 14:07:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWM0OGViNTIyZGJmNGM4MDdhNzkxNTFhOTI0NDNmNmOjM+Tl: 00:36:49.347 14:07:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjU4NTc1NmZhYzA4ZjQ0Y2I5NDliY2NjOGUwZTVkOTQ2OTNlNjk5NTVjMGZkZWVlNjEwMzM4NTg4MzliMzkzY5vPHe0=: ]] 00:36:49.347 14:07:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjU4NTc1NmZhYzA4ZjQ0Y2I5NDliY2NjOGUwZTVkOTQ2OTNlNjk5NTVjMGZkZWVlNjEwMzM4NTg4MzliMzkzY5vPHe0=: 00:36:49.347 14:07:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:36:49.347 14:07:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:49.347 14:07:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:49.347 14:07:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:49.347 14:07:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:49.347 14:07:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:49.347 14:07:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:36:49.347 14:07:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:49.347 14:07:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:49.347 14:07:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:49.347 14:07:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:49.347 14:07:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:49.347 14:07:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:49.347 14:07:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:49.347 14:07:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:49.347 14:07:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:49.347 14:07:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:49.347 14:07:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:49.347 14:07:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:49.347 14:07:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:49.347 14:07:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:49.347 14:07:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:49.347 14:07:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:49.347 14:07:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:49.661 nvme0n1 00:36:49.661 14:07:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:49.661 14:07:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:49.661 14:07:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:49.661 14:07:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:49.661 14:07:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:49.661 14:07:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:49.918 14:07:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:49.918 14:07:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:49.918 14:07:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:49.918 14:07:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:49.918 14:07:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:49.918 14:07:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:49.918 14:07:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:36:49.918 14:07:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:49.918 14:07:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:49.918 14:07:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:49.918 14:07:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 
00:36:49.918 14:07:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2UwNmY2ZGFjZDY1NTEwNGU3YjRiMjkzNWFlYTY1MzYyNzM1MTM3Mzc3NmM3ZGY3oPZozQ==: 00:36:49.918 14:07:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWQ5ZjY4NjlkMDcyNmIyZGE3NjZhZmEwNzMyMTA1YzdmNDAwZDg0NzEwMDQyNzUy92f4Lg==: 00:36:49.918 14:07:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:49.918 14:07:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:49.918 14:07:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2UwNmY2ZGFjZDY1NTEwNGU3YjRiMjkzNWFlYTY1MzYyNzM1MTM3Mzc3NmM3ZGY3oPZozQ==: 00:36:49.918 14:07:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWQ5ZjY4NjlkMDcyNmIyZGE3NjZhZmEwNzMyMTA1YzdmNDAwZDg0NzEwMDQyNzUy92f4Lg==: ]] 00:36:49.918 14:07:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWQ5ZjY4NjlkMDcyNmIyZGE3NjZhZmEwNzMyMTA1YzdmNDAwZDg0NzEwMDQyNzUy92f4Lg==: 00:36:49.918 14:07:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:36:49.918 14:07:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:49.918 14:07:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:49.918 14:07:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:49.918 14:07:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:49.918 14:07:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:49.918 14:07:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:36:49.918 14:07:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:49.918 14:07:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:49.918 14:07:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:49.918 14:07:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:49.918 14:07:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:49.918 14:07:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:49.918 14:07:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:49.918 14:07:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:49.918 14:07:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:49.918 14:07:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:49.918 14:07:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:49.918 14:07:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:49.918 14:07:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:49.918 14:07:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:49.918 14:07:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:49.918 14:07:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:49.918 14:07:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:50.176 nvme0n1 00:36:50.176 14:07:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:50.176 14:07:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:50.176 14:07:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:50.176 14:07:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:50.176 14:07:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:50.176 14:07:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:50.176 14:07:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:50.176 14:07:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:50.176 14:07:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:50.176 14:07:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:50.176 14:07:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:50.176 14:07:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:50.176 14:07:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:36:50.176 14:07:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:50.176 14:07:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:50.176 14:07:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:50.176 14:07:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:50.176 14:07:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWNhODY3NDMyOTlmNjZkYmZiNjU4OTQ0YTljNWMxYWKXTIz0: 00:36:50.176 14:07:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzgxOGQzN2RkNjFjYzE3MTRiMTI1NGE5NDBkZWU2MTPqGmQM: 00:36:50.176 14:07:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:50.176 14:07:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:50.176 14:07:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWNhODY3NDMyOTlmNjZkYmZiNjU4OTQ0YTljNWMxYWKXTIz0: 00:36:50.176 14:07:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzgxOGQzN2RkNjFjYzE3MTRiMTI1NGE5NDBkZWU2MTPqGmQM: ]] 00:36:50.176 14:07:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzgxOGQzN2RkNjFjYzE3MTRiMTI1NGE5NDBkZWU2MTPqGmQM: 00:36:50.176 14:07:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:36:50.176 14:07:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:50.176 14:07:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:50.176 14:07:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:50.176 14:07:49 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:50.176 14:07:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:50.177 14:07:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:36:50.177 14:07:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:50.177 14:07:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:50.177 14:07:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:50.177 14:07:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:50.177 14:07:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:50.177 14:07:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:50.177 14:07:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:50.177 14:07:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:50.177 14:07:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:50.177 14:07:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:50.177 14:07:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:50.177 14:07:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:50.177 14:07:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:50.177 14:07:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:50.177 14:07:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:50.177 14:07:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:50.177 14:07:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:50.742 nvme0n1 00:36:50.742 14:07:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:50.742 14:07:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:50.742 14:07:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:50.742 14:07:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:50.742 14:07:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:50.743 14:07:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:50.743 14:07:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:50.743 14:07:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:50.743 14:07:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:50.743 14:07:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:50.743 14:07:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
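
The DHHC-1:<t>:<base64>: strings echoed through host/auth.sh@45-51 are the standard NVMe in-band authentication secret representation; the second field identifies the hash with which the secret was transformed (00 = no transformation). The key0..key4 and ckey0..ckey3 names passed to --dhchap-key/--dhchap-ctrlr-key refer to keyring entries the suite registers before this loop. A hedged sketch of preparing one such key by hand, using nvme-cli's gen-dhchap-key and SPDK's keyring_file_add_key RPC (the /tmp path and the key name are illustrative, not taken from the trace):

  # Generate a fresh 48-byte DH-HMAC-CHAP secret in DHHC-1 form.
  nvme gen-dhchap-key --key-length=48 > /tmp/key2.dhchap
  # Register it with SPDK's file-based keyring under the name the attach RPC references.
  scripts/rpc.py keyring_file_add_key key2 /tmp/key2.dhchap
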
00:36:50.743 14:07:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:50.743 14:07:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:36:50.743 14:07:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:50.743 14:07:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:50.743 14:07:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:50.743 14:07:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:50.743 14:07:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjY0NzU4ZGZkODAxYWRjN2E3MGExMDA3Yjg3YWYwNjRlNzM3ZmFkYTQyZTA2NjVjOSSd0Q==: 00:36:50.743 14:07:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDUyNmQ1ZGM1MTVhMDEzYjRkMmE3Yjc1ODY0ZGI3N2QxlQ6S: 00:36:50.743 14:07:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:50.743 14:07:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:50.743 14:07:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjY0NzU4ZGZkODAxYWRjN2E3MGExMDA3Yjg3YWYwNjRlNzM3ZmFkYTQyZTA2NjVjOSSd0Q==: 00:36:50.743 14:07:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDUyNmQ1ZGM1MTVhMDEzYjRkMmE3Yjc1ODY0ZGI3N2QxlQ6S: ]] 00:36:50.743 14:07:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDUyNmQ1ZGM1MTVhMDEzYjRkMmE3Yjc1ODY0ZGI3N2QxlQ6S: 00:36:50.743 14:07:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:36:50.743 14:07:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:50.743 14:07:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:50.743 14:07:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:50.743 14:07:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:50.743 14:07:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:50.743 14:07:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:36:50.743 14:07:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:50.743 14:07:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:50.743 14:07:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:50.743 14:07:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:50.743 14:07:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:50.743 14:07:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:50.743 14:07:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:50.743 14:07:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:50.743 14:07:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:50.743 14:07:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:50.743 14:07:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:50.743 14:07:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:50.743 14:07:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:50.743 14:07:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:50.743 14:07:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:50.743 14:07:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:50.743 14:07:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:51.001 nvme0n1 00:36:51.001 14:07:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:51.001 14:07:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:51.001 14:07:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:51.001 14:07:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:51.001 14:07:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:51.001 14:07:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:51.001 14:07:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:51.001 14:07:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:51.001 14:07:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:51.001 14:07:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:51.258 14:07:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:51.258 14:07:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:51.258 14:07:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:36:51.258 14:07:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:51.258 14:07:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:51.258 14:07:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:51.258 14:07:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:51.259 14:07:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTVlNzViMTg5YjNkNzFjNDg4NGJhOTA5YmEyMDA3ZjA5OWRjNmQxYjg0NmY5ZTNhYTc1NjBmMDlmY2M1NGJiM1abWWA=: 00:36:51.259 14:07:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:51.259 14:07:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:51.259 14:07:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:51.259 14:07:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTVlNzViMTg5YjNkNzFjNDg4NGJhOTA5YmEyMDA3ZjA5OWRjNmQxYjg0NmY5ZTNhYTc1NjBmMDlmY2M1NGJiM1abWWA=: 00:36:51.259 14:07:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:51.259 14:07:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate 
sha384 ffdhe6144 4 00:36:51.259 14:07:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:51.259 14:07:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:51.259 14:07:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:51.259 14:07:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:51.259 14:07:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:51.259 14:07:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:36:51.259 14:07:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:51.259 14:07:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:51.259 14:07:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:51.259 14:07:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:51.259 14:07:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:51.259 14:07:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:51.259 14:07:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:51.259 14:07:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:51.259 14:07:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:51.259 14:07:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:51.259 14:07:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:51.259 14:07:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:51.259 14:07:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:51.259 14:07:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:51.259 14:07:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:51.259 14:07:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:51.259 14:07:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:51.518 nvme0n1 00:36:51.518 14:07:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:51.518 14:07:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:51.518 14:07:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:51.518 14:07:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:51.518 14:07:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:51.518 14:07:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:51.518 14:07:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:51.518 14:07:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller 
nvme0 00:36:51.518 14:07:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:51.518 14:07:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:51.518 14:07:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:51.518 14:07:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:51.518 14:07:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:51.518 14:07:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:36:51.518 14:07:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:51.518 14:07:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:51.518 14:07:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:51.518 14:07:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:51.518 14:07:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWM0OGViNTIyZGJmNGM4MDdhNzkxNTFhOTI0NDNmNmOjM+Tl: 00:36:51.518 14:07:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjU4NTc1NmZhYzA4ZjQ0Y2I5NDliY2NjOGUwZTVkOTQ2OTNlNjk5NTVjMGZkZWVlNjEwMzM4NTg4MzliMzkzY5vPHe0=: 00:36:51.518 14:07:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:51.518 14:07:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:51.518 14:07:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWM0OGViNTIyZGJmNGM4MDdhNzkxNTFhOTI0NDNmNmOjM+Tl: 00:36:51.518 14:07:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjU4NTc1NmZhYzA4ZjQ0Y2I5NDliY2NjOGUwZTVkOTQ2OTNlNjk5NTVjMGZkZWVlNjEwMzM4NTg4MzliMzkzY5vPHe0=: ]] 00:36:51.518 14:07:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjU4NTc1NmZhYzA4ZjQ0Y2I5NDliY2NjOGUwZTVkOTQ2OTNlNjk5NTVjMGZkZWVlNjEwMzM4NTg4MzliMzkzY5vPHe0=: 00:36:51.518 14:07:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:36:51.518 14:07:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:51.518 14:07:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:51.518 14:07:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:51.518 14:07:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:51.518 14:07:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:51.518 14:07:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:36:51.518 14:07:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:51.518 14:07:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:51.518 14:07:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:51.518 14:07:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:51.518 14:07:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:51.518 14:07:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates=() 00:36:51.518 14:07:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:51.518 14:07:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:51.518 14:07:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:51.518 14:07:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:51.518 14:07:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:51.518 14:07:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:51.518 14:07:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:51.518 14:07:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:51.518 14:07:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:51.518 14:07:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:51.518 14:07:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:52.086 nvme0n1 00:36:52.086 14:07:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:52.086 14:07:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:52.086 14:07:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:52.086 14:07:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:52.086 14:07:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:52.086 14:07:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:52.086 14:07:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:52.086 14:07:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:52.086 14:07:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:52.086 14:07:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:52.347 14:07:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:52.347 14:07:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:52.347 14:07:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:36:52.347 14:07:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:52.347 14:07:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:52.347 14:07:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:52.347 14:07:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:52.347 14:07:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2UwNmY2ZGFjZDY1NTEwNGU3YjRiMjkzNWFlYTY1MzYyNzM1MTM3Mzc3NmM3ZGY3oPZozQ==: 00:36:52.347 14:07:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:YWQ5ZjY4NjlkMDcyNmIyZGE3NjZhZmEwNzMyMTA1YzdmNDAwZDg0NzEwMDQyNzUy92f4Lg==: 00:36:52.347 14:07:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:52.347 14:07:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:52.347 14:07:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2UwNmY2ZGFjZDY1NTEwNGU3YjRiMjkzNWFlYTY1MzYyNzM1MTM3Mzc3NmM3ZGY3oPZozQ==: 00:36:52.347 14:07:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWQ5ZjY4NjlkMDcyNmIyZGE3NjZhZmEwNzMyMTA1YzdmNDAwZDg0NzEwMDQyNzUy92f4Lg==: ]] 00:36:52.347 14:07:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWQ5ZjY4NjlkMDcyNmIyZGE3NjZhZmEwNzMyMTA1YzdmNDAwZDg0NzEwMDQyNzUy92f4Lg==: 00:36:52.347 14:07:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:36:52.347 14:07:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:52.347 14:07:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:52.347 14:07:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:52.347 14:07:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:52.347 14:07:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:52.347 14:07:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:36:52.347 14:07:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:52.347 14:07:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:52.347 14:07:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:52.347 14:07:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:52.347 14:07:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:52.347 14:07:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:52.347 14:07:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:52.347 14:07:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:52.347 14:07:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:52.347 14:07:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:52.347 14:07:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:52.347 14:07:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:52.347 14:07:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:52.347 14:07:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:52.347 14:07:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:52.348 14:07:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:52.348 14:07:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:36:53.132 nvme0n1 00:36:53.132 14:07:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:53.132 14:07:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:53.132 14:07:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:53.132 14:07:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:53.132 14:07:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:53.132 14:07:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:53.132 14:07:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:53.132 14:07:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:53.132 14:07:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:53.132 14:07:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:53.132 14:07:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:53.132 14:07:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:53.132 14:07:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:36:53.132 14:07:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:53.132 14:07:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:53.132 14:07:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:53.132 14:07:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:53.132 14:07:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWNhODY3NDMyOTlmNjZkYmZiNjU4OTQ0YTljNWMxYWKXTIz0: 00:36:53.132 14:07:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzgxOGQzN2RkNjFjYzE3MTRiMTI1NGE5NDBkZWU2MTPqGmQM: 00:36:53.132 14:07:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:53.132 14:07:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:53.132 14:07:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWNhODY3NDMyOTlmNjZkYmZiNjU4OTQ0YTljNWMxYWKXTIz0: 00:36:53.132 14:07:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzgxOGQzN2RkNjFjYzE3MTRiMTI1NGE5NDBkZWU2MTPqGmQM: ]] 00:36:53.132 14:07:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzgxOGQzN2RkNjFjYzE3MTRiMTI1NGE5NDBkZWU2MTPqGmQM: 00:36:53.132 14:07:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:36:53.132 14:07:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:53.132 14:07:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:53.132 14:07:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:53.132 14:07:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:53.132 14:07:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:53.132 14:07:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # 
rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:36:53.132 14:07:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:53.132 14:07:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:53.132 14:07:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:53.132 14:07:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:53.132 14:07:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:53.132 14:07:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:53.132 14:07:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:53.133 14:07:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:53.133 14:07:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:53.133 14:07:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:53.133 14:07:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:53.133 14:07:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:53.133 14:07:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:53.133 14:07:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:53.133 14:07:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:53.133 14:07:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:53.133 14:07:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:53.392 nvme0n1 00:36:53.392 14:07:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:53.392 14:07:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:53.392 14:07:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:53.392 14:07:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:53.392 14:07:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:53.392 14:07:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:53.392 14:07:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:53.392 14:07:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:53.392 14:07:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:53.392 14:07:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:53.392 14:07:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:53.392 14:07:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:53.392 14:07:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:36:53.392 14:07:53 nvmf_rdma.nvmf_host.nvmf_auth_host 
-- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:53.392 14:07:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:53.392 14:07:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:53.392 14:07:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:53.392 14:07:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjY0NzU4ZGZkODAxYWRjN2E3MGExMDA3Yjg3YWYwNjRlNzM3ZmFkYTQyZTA2NjVjOSSd0Q==: 00:36:53.393 14:07:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDUyNmQ1ZGM1MTVhMDEzYjRkMmE3Yjc1ODY0ZGI3N2QxlQ6S: 00:36:53.393 14:07:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:53.393 14:07:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:53.393 14:07:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjY0NzU4ZGZkODAxYWRjN2E3MGExMDA3Yjg3YWYwNjRlNzM3ZmFkYTQyZTA2NjVjOSSd0Q==: 00:36:53.393 14:07:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDUyNmQ1ZGM1MTVhMDEzYjRkMmE3Yjc1ODY0ZGI3N2QxlQ6S: ]] 00:36:53.393 14:07:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDUyNmQ1ZGM1MTVhMDEzYjRkMmE3Yjc1ODY0ZGI3N2QxlQ6S: 00:36:53.393 14:07:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:36:53.393 14:07:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:53.393 14:07:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:53.393 14:07:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:53.393 14:07:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:53.393 14:07:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:53.393 14:07:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:36:53.393 14:07:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:53.393 14:07:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:53.393 14:07:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:53.393 14:07:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:53.393 14:07:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:53.393 14:07:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:53.393 14:07:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:53.393 14:07:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:53.393 14:07:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:53.393 14:07:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:53.393 14:07:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:53.393 14:07:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:53.393 14:07:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:53.393 
14:07:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:53.393 14:07:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:53.393 14:07:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:53.393 14:07:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:53.961 nvme0n1 00:36:53.961 14:07:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:53.961 14:07:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:53.961 14:07:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:53.961 14:07:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:53.961 14:07:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:53.961 14:07:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:54.221 14:07:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:54.221 14:07:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:54.221 14:07:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:54.221 14:07:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:54.221 14:07:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:54.221 14:07:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:54.221 14:07:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:36:54.221 14:07:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:54.221 14:07:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:54.221 14:07:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:54.221 14:07:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:54.221 14:07:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTVlNzViMTg5YjNkNzFjNDg4NGJhOTA5YmEyMDA3ZjA5OWRjNmQxYjg0NmY5ZTNhYTc1NjBmMDlmY2M1NGJiM1abWWA=: 00:36:54.221 14:07:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:54.221 14:07:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:54.221 14:07:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:54.221 14:07:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTVlNzViMTg5YjNkNzFjNDg4NGJhOTA5YmEyMDA3ZjA5OWRjNmQxYjg0NmY5ZTNhYTc1NjBmMDlmY2M1NGJiM1abWWA=: 00:36:54.221 14:07:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:54.221 14:07:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:36:54.221 14:07:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:54.221 14:07:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:54.221 14:07:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:54.221 14:07:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:54.221 14:07:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:54.221 14:07:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:36:54.221 14:07:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:54.221 14:07:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:54.221 14:07:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:54.221 14:07:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:54.221 14:07:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:54.221 14:07:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:54.221 14:07:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:54.221 14:07:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:54.221 14:07:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:54.221 14:07:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:54.221 14:07:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:54.221 14:07:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:54.221 14:07:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:54.221 14:07:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:54.221 14:07:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:54.221 14:07:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:54.221 14:07:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:54.789 nvme0n1 00:36:54.789 14:07:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:54.789 14:07:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:54.789 14:07:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:54.789 14:07:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:54.789 14:07:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:54.789 14:07:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:54.789 14:07:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:54.789 14:07:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:54.789 14:07:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:54.789 14:07:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:54.789 14:07:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:54.789 14:07:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:36:54.789 14:07:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:54.789 14:07:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:54.789 14:07:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:36:54.789 14:07:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:54.789 14:07:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:54.789 14:07:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:54.789 14:07:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:54.789 14:07:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWM0OGViNTIyZGJmNGM4MDdhNzkxNTFhOTI0NDNmNmOjM+Tl: 00:36:54.789 14:07:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjU4NTc1NmZhYzA4ZjQ0Y2I5NDliY2NjOGUwZTVkOTQ2OTNlNjk5NTVjMGZkZWVlNjEwMzM4NTg4MzliMzkzY5vPHe0=: 00:36:54.789 14:07:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:54.789 14:07:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:54.789 14:07:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWM0OGViNTIyZGJmNGM4MDdhNzkxNTFhOTI0NDNmNmOjM+Tl: 00:36:54.789 14:07:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjU4NTc1NmZhYzA4ZjQ0Y2I5NDliY2NjOGUwZTVkOTQ2OTNlNjk5NTVjMGZkZWVlNjEwMzM4NTg4MzliMzkzY5vPHe0=: ]] 00:36:54.789 14:07:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjU4NTc1NmZhYzA4ZjQ0Y2I5NDliY2NjOGUwZTVkOTQ2OTNlNjk5NTVjMGZkZWVlNjEwMzM4NTg4MzliMzkzY5vPHe0=: 00:36:54.789 14:07:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:36:54.789 14:07:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:54.789 14:07:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:54.789 14:07:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:54.789 14:07:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:54.790 14:07:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:54.790 14:07:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:36:54.790 14:07:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:54.790 14:07:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:54.790 14:07:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:54.790 14:07:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:54.790 14:07:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:54.790 14:07:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:54.790 14:07:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:54.790 14:07:54 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:54.790 14:07:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:54.790 14:07:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:54.790 14:07:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:54.790 14:07:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:54.790 14:07:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:54.790 14:07:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:54.790 14:07:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:54.790 14:07:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:54.790 14:07:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:55.049 nvme0n1 00:36:55.049 14:07:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:55.049 14:07:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:55.049 14:07:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:55.049 14:07:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:55.049 14:07:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:55.049 14:07:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:55.049 14:07:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:55.049 14:07:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:55.049 14:07:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:55.049 14:07:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:55.049 14:07:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:55.049 14:07:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:55.049 14:07:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:36:55.049 14:07:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:55.049 14:07:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:55.049 14:07:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:55.049 14:07:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:55.049 14:07:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2UwNmY2ZGFjZDY1NTEwNGU3YjRiMjkzNWFlYTY1MzYyNzM1MTM3Mzc3NmM3ZGY3oPZozQ==: 00:36:55.049 14:07:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWQ5ZjY4NjlkMDcyNmIyZGE3NjZhZmEwNzMyMTA1YzdmNDAwZDg0NzEwMDQyNzUy92f4Lg==: 00:36:55.049 14:07:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:55.049 14:07:54 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:55.049 14:07:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2UwNmY2ZGFjZDY1NTEwNGU3YjRiMjkzNWFlYTY1MzYyNzM1MTM3Mzc3NmM3ZGY3oPZozQ==: 00:36:55.049 14:07:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWQ5ZjY4NjlkMDcyNmIyZGE3NjZhZmEwNzMyMTA1YzdmNDAwZDg0NzEwMDQyNzUy92f4Lg==: ]] 00:36:55.049 14:07:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWQ5ZjY4NjlkMDcyNmIyZGE3NjZhZmEwNzMyMTA1YzdmNDAwZDg0NzEwMDQyNzUy92f4Lg==: 00:36:55.049 14:07:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:36:55.049 14:07:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:55.049 14:07:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:55.049 14:07:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:55.049 14:07:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:55.049 14:07:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:55.049 14:07:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:36:55.049 14:07:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:55.049 14:07:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:55.049 14:07:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:55.049 14:07:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:55.049 14:07:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:55.049 14:07:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:55.049 14:07:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:55.049 14:07:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:55.049 14:07:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:55.049 14:07:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:55.049 14:07:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:55.049 14:07:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:55.049 14:07:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:55.049 14:07:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:55.049 14:07:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:55.049 14:07:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:55.049 14:07:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:55.308 nvme0n1 00:36:55.308 14:07:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:55.308 14:07:54 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:55.308 14:07:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:55.308 14:07:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:55.308 14:07:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:55.308 14:07:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:55.308 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:55.308 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:55.308 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:55.308 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:55.308 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:55.308 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:55.308 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:36:55.308 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:55.308 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:55.308 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:55.308 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:55.308 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWNhODY3NDMyOTlmNjZkYmZiNjU4OTQ0YTljNWMxYWKXTIz0: 00:36:55.308 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzgxOGQzN2RkNjFjYzE3MTRiMTI1NGE5NDBkZWU2MTPqGmQM: 00:36:55.308 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:55.308 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:55.309 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWNhODY3NDMyOTlmNjZkYmZiNjU4OTQ0YTljNWMxYWKXTIz0: 00:36:55.309 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzgxOGQzN2RkNjFjYzE3MTRiMTI1NGE5NDBkZWU2MTPqGmQM: ]] 00:36:55.309 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzgxOGQzN2RkNjFjYzE3MTRiMTI1NGE5NDBkZWU2MTPqGmQM: 00:36:55.309 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:36:55.309 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:55.309 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:55.309 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:55.309 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:55.309 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:55.309 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:36:55.309 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
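The host-side half of each iteration above boils down to four RPCs: restrict the allowed DH-HMAC-CHAP digests and DH groups, attach the controller with the key pair under test, confirm the controller came up as nvme0, and detach it again. A minimal sketch of that sequence, assuming a target is already listening on 192.168.100.8:4420 and that keys named key2/ckey2 were registered earlier in the run (the trace issues these same RPCs through its rpc_cmd wrapper; driving them via scripts/rpc.py is an assumption):

    # exercise one digest/dhgroup/key combination, mirroring the RPCs in the trace
    scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
        -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2
    scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
    scripts/rpc.py bdev_nvme_detach_controller nvme0

If authentication fails for a combination, the attach RPC errors out and no nvme0 controller appears in bdev_nvme_get_controllers, which is what the [[ nvme0 == \n\v\m\e\0 ]] check in the trace guards against.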
00:36:55.309 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:55.309 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:55.309 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:55.309 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:55.309 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:55.309 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:55.309 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:55.309 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:55.309 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:55.309 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:55.309 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:55.309 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:55.309 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:55.309 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:55.309 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:55.309 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:55.567 nvme0n1 00:36:55.567 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:55.567 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:55.567 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:55.567 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:55.567 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:55.567 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:55.567 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:55.567 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:55.567 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:55.567 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:55.567 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:55.567 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:55.567 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:36:55.567 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:55.567 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:55.567 14:07:55 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:55.567 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:55.567 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjY0NzU4ZGZkODAxYWRjN2E3MGExMDA3Yjg3YWYwNjRlNzM3ZmFkYTQyZTA2NjVjOSSd0Q==: 00:36:55.567 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDUyNmQ1ZGM1MTVhMDEzYjRkMmE3Yjc1ODY0ZGI3N2QxlQ6S: 00:36:55.567 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:55.567 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:55.567 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjY0NzU4ZGZkODAxYWRjN2E3MGExMDA3Yjg3YWYwNjRlNzM3ZmFkYTQyZTA2NjVjOSSd0Q==: 00:36:55.567 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDUyNmQ1ZGM1MTVhMDEzYjRkMmE3Yjc1ODY0ZGI3N2QxlQ6S: ]] 00:36:55.567 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDUyNmQ1ZGM1MTVhMDEzYjRkMmE3Yjc1ODY0ZGI3N2QxlQ6S: 00:36:55.567 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:36:55.567 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:55.567 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:55.567 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:55.567 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:55.567 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:55.567 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:36:55.567 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:55.567 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:55.567 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:55.567 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:55.567 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:55.567 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:55.567 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:55.567 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:55.567 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:55.567 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:55.567 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:55.567 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:55.567 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:55.567 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:55.567 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:55.567 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:55.567 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:55.825 nvme0n1 00:36:55.825 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:55.825 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:55.825 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:55.825 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:55.825 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:55.825 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:55.825 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:55.825 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:55.825 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:55.825 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:55.825 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:55.825 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:55.825 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:36:55.825 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:55.825 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:55.825 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:55.825 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:55.825 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTVlNzViMTg5YjNkNzFjNDg4NGJhOTA5YmEyMDA3ZjA5OWRjNmQxYjg0NmY5ZTNhYTc1NjBmMDlmY2M1NGJiM1abWWA=: 00:36:55.825 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:55.825 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:55.825 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:55.825 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTVlNzViMTg5YjNkNzFjNDg4NGJhOTA5YmEyMDA3ZjA5OWRjNmQxYjg0NmY5ZTNhYTc1NjBmMDlmY2M1NGJiM1abWWA=: 00:36:55.826 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:55.826 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:36:55.826 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:55.826 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:55.826 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:55.826 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:55.826 14:07:55 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:55.826 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:36:55.826 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:55.826 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:55.826 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:55.826 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:55.826 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:55.826 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:55.826 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:55.826 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:55.826 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:55.826 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:55.826 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:55.826 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:55.826 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:55.826 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:55.826 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:55.826 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:55.826 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:56.084 nvme0n1 00:36:56.084 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:56.084 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:56.084 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:56.084 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:56.084 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:56.084 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:56.084 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:56.084 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:56.084 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:56.084 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:56.084 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:56.084 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 
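With ffdhe2048 finished, the trace moves to the next DH group (ffdhe3072) under the same sha512 digest, which exposes the overall shape of the test: three nested loops over digests, DH groups, and key IDs, each iteration installing the target-side key and then authenticating from the host. A sketch of that control flow, reconstructed from the host/auth.sh@100-104 trace lines above (array contents beyond what the trace itself shows are assumptions):

    for digest in "${digests[@]}"; do          # host/auth.sh@100
      for dhgroup in "${dhgroups[@]}"; do      # host/auth.sh@101
        for keyid in "${!keys[@]}"; do         # host/auth.sh@102
          nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"     # target side, @103
          connect_authenticate "$digest" "$dhgroup" "$keyid"   # host side, @104
        done
      done
    done

Within this section the trace covers sha384 and sha512 digests, the ffdhe8192, ffdhe2048, and ffdhe3072 groups, and key IDs 0 through 4; key ID 4 carries no controller key (its ckey is empty), so that iteration attaches with --dhchap-key key4 alone.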
00:36:56.084 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:56.084 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:36:56.084 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:56.084 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:56.084 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:56.084 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:56.084 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWM0OGViNTIyZGJmNGM4MDdhNzkxNTFhOTI0NDNmNmOjM+Tl: 00:36:56.084 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjU4NTc1NmZhYzA4ZjQ0Y2I5NDliY2NjOGUwZTVkOTQ2OTNlNjk5NTVjMGZkZWVlNjEwMzM4NTg4MzliMzkzY5vPHe0=: 00:36:56.084 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:56.084 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:56.084 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWM0OGViNTIyZGJmNGM4MDdhNzkxNTFhOTI0NDNmNmOjM+Tl: 00:36:56.084 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjU4NTc1NmZhYzA4ZjQ0Y2I5NDliY2NjOGUwZTVkOTQ2OTNlNjk5NTVjMGZkZWVlNjEwMzM4NTg4MzliMzkzY5vPHe0=: ]] 00:36:56.084 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjU4NTc1NmZhYzA4ZjQ0Y2I5NDliY2NjOGUwZTVkOTQ2OTNlNjk5NTVjMGZkZWVlNjEwMzM4NTg4MzliMzkzY5vPHe0=: 00:36:56.084 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:36:56.084 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:56.084 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:56.084 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:56.085 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:56.085 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:56.085 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:36:56.085 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:56.085 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:56.085 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:56.085 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:56.085 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:56.085 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:56.085 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:56.085 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:56.085 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:56.085 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:56.085 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:56.085 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:56.085 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:56.085 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:56.085 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:56.085 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:56.085 14:07:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:56.344 nvme0n1 00:36:56.344 14:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:56.344 14:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:56.344 14:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:56.344 14:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:56.344 14:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:56.344 14:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:56.344 14:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:56.344 14:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:56.344 14:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:56.344 14:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:56.344 14:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:56.344 14:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:56.344 14:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:36:56.344 14:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:56.344 14:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:56.344 14:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:56.344 14:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:56.344 14:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2UwNmY2ZGFjZDY1NTEwNGU3YjRiMjkzNWFlYTY1MzYyNzM1MTM3Mzc3NmM3ZGY3oPZozQ==: 00:36:56.344 14:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWQ5ZjY4NjlkMDcyNmIyZGE3NjZhZmEwNzMyMTA1YzdmNDAwZDg0NzEwMDQyNzUy92f4Lg==: 00:36:56.344 14:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:56.344 14:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:56.344 14:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2UwNmY2ZGFjZDY1NTEwNGU3YjRiMjkzNWFlYTY1MzYyNzM1MTM3Mzc3NmM3ZGY3oPZozQ==: 00:36:56.344 14:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # [[ -z DHHC-1:02:YWQ5ZjY4NjlkMDcyNmIyZGE3NjZhZmEwNzMyMTA1YzdmNDAwZDg0NzEwMDQyNzUy92f4Lg==: ]] 00:36:56.344 14:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWQ5ZjY4NjlkMDcyNmIyZGE3NjZhZmEwNzMyMTA1YzdmNDAwZDg0NzEwMDQyNzUy92f4Lg==: 00:36:56.344 14:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:36:56.344 14:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:56.344 14:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:56.344 14:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:56.344 14:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:56.344 14:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:56.344 14:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:36:56.344 14:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:56.344 14:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:56.344 14:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:56.344 14:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:56.344 14:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:56.344 14:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:56.344 14:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:56.344 14:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:56.344 14:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:56.344 14:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:56.344 14:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:56.344 14:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:56.344 14:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:56.344 14:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:56.344 14:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:56.344 14:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:56.344 14:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:56.602 nvme0n1 00:36:56.602 14:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:56.602 14:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:56.602 14:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:56.602 14:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:56.602 14:07:56 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:56.602 14:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:56.602 14:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:56.602 14:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:56.602 14:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:56.602 14:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:56.602 14:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:56.603 14:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:56.603 14:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:36:56.603 14:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:56.603 14:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:56.603 14:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:56.603 14:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:56.603 14:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWNhODY3NDMyOTlmNjZkYmZiNjU4OTQ0YTljNWMxYWKXTIz0: 00:36:56.603 14:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzgxOGQzN2RkNjFjYzE3MTRiMTI1NGE5NDBkZWU2MTPqGmQM: 00:36:56.603 14:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:56.603 14:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:56.603 14:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWNhODY3NDMyOTlmNjZkYmZiNjU4OTQ0YTljNWMxYWKXTIz0: 00:36:56.603 14:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzgxOGQzN2RkNjFjYzE3MTRiMTI1NGE5NDBkZWU2MTPqGmQM: ]] 00:36:56.603 14:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzgxOGQzN2RkNjFjYzE3MTRiMTI1NGE5NDBkZWU2MTPqGmQM: 00:36:56.603 14:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:36:56.603 14:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:56.603 14:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:56.603 14:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:56.603 14:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:56.603 14:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:56.603 14:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:36:56.603 14:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:56.603 14:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:56.861 14:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:56.861 14:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:56.861 14:07:56 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:56.861 14:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:56.861 14:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:56.861 14:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:56.861 14:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:56.861 14:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:56.861 14:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:56.861 14:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:56.861 14:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:56.861 14:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:56.861 14:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:56.861 14:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:56.861 14:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:56.861 nvme0n1 00:36:56.861 14:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:56.861 14:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:56.861 14:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:56.861 14:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:56.861 14:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:56.861 14:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:57.121 14:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:57.121 14:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:57.121 14:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:57.121 14:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:57.121 14:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:57.121 14:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:57.121 14:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:36:57.121 14:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:57.121 14:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:57.121 14:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:57.121 14:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:57.121 14:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjY0NzU4ZGZkODAxYWRjN2E3MGExMDA3Yjg3YWYwNjRlNzM3ZmFkYTQyZTA2NjVjOSSd0Q==: 
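[Annotation] Each iteration traced above follows the same RPC sequence: restrict the host to the digest/dhgroup pair under test with bdev_nvme_set_options, attach the controller over RDMA with the keypair for that keyid, confirm the controller came up via bdev_nvme_get_controllers, then tear it down with bdev_nvme_detach_controller. A minimal sketch of one such iteration, assuming rpc_cmd is the test harness's wrapper around scripts/rpc.py and that key2/ckey2 name secrets registered with the target earlier in the script (not shown here):

    # host side: allow only the digest/dhgroup combination being exercised
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072

    # connect with keyid 2; --dhchap-key is the host key, --dhchap-ctrlr-key
    # enables bidirectional authentication of the controller
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
        -a 192.168.100.8 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2

    # the attach only succeeds if DH-HMAC-CHAP completed; verify, then clean up
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
    rpc_cmd bdev_nvme_detach_controller nvme0
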
00:36:57.121 14:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDUyNmQ1ZGM1MTVhMDEzYjRkMmE3Yjc1ODY0ZGI3N2QxlQ6S: 00:36:57.121 14:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:57.121 14:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:57.121 14:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjY0NzU4ZGZkODAxYWRjN2E3MGExMDA3Yjg3YWYwNjRlNzM3ZmFkYTQyZTA2NjVjOSSd0Q==: 00:36:57.121 14:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDUyNmQ1ZGM1MTVhMDEzYjRkMmE3Yjc1ODY0ZGI3N2QxlQ6S: ]] 00:36:57.121 14:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDUyNmQ1ZGM1MTVhMDEzYjRkMmE3Yjc1ODY0ZGI3N2QxlQ6S: 00:36:57.121 14:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:36:57.121 14:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:57.121 14:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:57.121 14:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:57.121 14:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:57.121 14:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:57.121 14:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:36:57.121 14:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:57.121 14:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:57.121 14:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:57.121 14:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:57.121 14:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:57.121 14:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:57.121 14:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:57.121 14:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:57.121 14:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:57.121 14:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:57.121 14:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:57.121 14:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:57.121 14:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:57.121 14:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:57.121 14:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:57.121 14:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:57.121 14:07:56 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:57.121 nvme0n1 00:36:57.121 14:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:57.121 14:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:57.121 14:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:57.121 14:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:57.121 14:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:57.380 14:07:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:57.380 14:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:57.380 14:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:57.380 14:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:57.380 14:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:57.380 14:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:57.380 14:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:57.380 14:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:36:57.380 14:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:57.380 14:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:57.380 14:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:57.380 14:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:57.380 14:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTVlNzViMTg5YjNkNzFjNDg4NGJhOTA5YmEyMDA3ZjA5OWRjNmQxYjg0NmY5ZTNhYTc1NjBmMDlmY2M1NGJiM1abWWA=: 00:36:57.380 14:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:57.381 14:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:57.381 14:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:57.381 14:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTVlNzViMTg5YjNkNzFjNDg4NGJhOTA5YmEyMDA3ZjA5OWRjNmQxYjg0NmY5ZTNhYTc1NjBmMDlmY2M1NGJiM1abWWA=: 00:36:57.381 14:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:57.381 14:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:36:57.381 14:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:57.381 14:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:57.381 14:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:57.381 14:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:57.381 14:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:57.381 14:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:36:57.381 14:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:36:57.381 14:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:57.381 14:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:57.381 14:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:57.381 14:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:57.381 14:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:57.381 14:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:57.381 14:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:57.381 14:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:57.381 14:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:57.381 14:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:57.381 14:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:57.381 14:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:57.381 14:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:57.381 14:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:57.381 14:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:57.381 14:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:57.641 nvme0n1 00:36:57.641 14:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:57.641 14:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:57.641 14:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:57.641 14:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:57.641 14:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:57.641 14:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:57.641 14:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:57.641 14:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:57.641 14:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:57.641 14:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:57.641 14:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:57.641 14:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:57.641 14:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:57.641 14:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:36:57.641 14:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid 
key ckey 00:36:57.641 14:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:57.641 14:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:57.641 14:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:57.641 14:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWM0OGViNTIyZGJmNGM4MDdhNzkxNTFhOTI0NDNmNmOjM+Tl: 00:36:57.641 14:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjU4NTc1NmZhYzA4ZjQ0Y2I5NDliY2NjOGUwZTVkOTQ2OTNlNjk5NTVjMGZkZWVlNjEwMzM4NTg4MzliMzkzY5vPHe0=: 00:36:57.641 14:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:57.641 14:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:57.641 14:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWM0OGViNTIyZGJmNGM4MDdhNzkxNTFhOTI0NDNmNmOjM+Tl: 00:36:57.641 14:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjU4NTc1NmZhYzA4ZjQ0Y2I5NDliY2NjOGUwZTVkOTQ2OTNlNjk5NTVjMGZkZWVlNjEwMzM4NTg4MzliMzkzY5vPHe0=: ]] 00:36:57.641 14:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjU4NTc1NmZhYzA4ZjQ0Y2I5NDliY2NjOGUwZTVkOTQ2OTNlNjk5NTVjMGZkZWVlNjEwMzM4NTg4MzliMzkzY5vPHe0=: 00:36:57.641 14:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:36:57.641 14:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:57.641 14:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:57.641 14:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:57.641 14:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:57.641 14:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:57.641 14:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:36:57.641 14:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:57.641 14:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:57.641 14:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:57.641 14:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:57.641 14:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:57.641 14:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:57.641 14:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:57.641 14:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:57.641 14:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:57.641 14:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:57.641 14:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:57.641 14:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:57.641 14:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 
192.168.100.8 ]] 00:36:57.641 14:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:57.641 14:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:57.641 14:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:57.641 14:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:57.901 nvme0n1 00:36:57.901 14:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:57.901 14:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:57.901 14:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:57.901 14:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:57.901 14:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:57.901 14:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:57.901 14:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:57.901 14:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:57.901 14:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:57.901 14:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:57.901 14:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:57.901 14:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:57.901 14:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:36:57.901 14:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:57.901 14:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:57.901 14:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:57.901 14:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:57.901 14:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2UwNmY2ZGFjZDY1NTEwNGU3YjRiMjkzNWFlYTY1MzYyNzM1MTM3Mzc3NmM3ZGY3oPZozQ==: 00:36:57.901 14:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWQ5ZjY4NjlkMDcyNmIyZGE3NjZhZmEwNzMyMTA1YzdmNDAwZDg0NzEwMDQyNzUy92f4Lg==: 00:36:57.901 14:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:57.901 14:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:57.901 14:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2UwNmY2ZGFjZDY1NTEwNGU3YjRiMjkzNWFlYTY1MzYyNzM1MTM3Mzc3NmM3ZGY3oPZozQ==: 00:36:57.901 14:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWQ5ZjY4NjlkMDcyNmIyZGE3NjZhZmEwNzMyMTA1YzdmNDAwZDg0NzEwMDQyNzUy92f4Lg==: ]] 00:36:57.901 14:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWQ5ZjY4NjlkMDcyNmIyZGE3NjZhZmEwNzMyMTA1YzdmNDAwZDg0NzEwMDQyNzUy92f4Lg==: 00:36:57.901 14:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:36:57.901 14:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:57.901 14:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:57.901 14:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:57.901 14:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:57.901 14:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:57.901 14:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:36:57.901 14:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:57.901 14:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:57.901 14:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:57.901 14:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:57.901 14:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:57.901 14:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:57.901 14:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:57.901 14:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:57.901 14:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:57.901 14:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:57.901 14:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:57.901 14:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:57.901 14:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:57.901 14:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:57.901 14:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:57.901 14:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:57.901 14:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:58.160 nvme0n1 00:36:58.160 14:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:58.160 14:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:58.160 14:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:58.160 14:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:58.160 14:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:58.160 14:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:58.160 14:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:58.160 14:07:57 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:58.161 14:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:58.161 14:07:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:58.419 14:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:58.419 14:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:58.419 14:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:36:58.419 14:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:58.419 14:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:58.419 14:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:58.419 14:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:58.419 14:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWNhODY3NDMyOTlmNjZkYmZiNjU4OTQ0YTljNWMxYWKXTIz0: 00:36:58.419 14:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzgxOGQzN2RkNjFjYzE3MTRiMTI1NGE5NDBkZWU2MTPqGmQM: 00:36:58.419 14:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:58.419 14:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:58.419 14:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWNhODY3NDMyOTlmNjZkYmZiNjU4OTQ0YTljNWMxYWKXTIz0: 00:36:58.419 14:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzgxOGQzN2RkNjFjYzE3MTRiMTI1NGE5NDBkZWU2MTPqGmQM: ]] 00:36:58.419 14:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzgxOGQzN2RkNjFjYzE3MTRiMTI1NGE5NDBkZWU2MTPqGmQM: 00:36:58.419 14:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:36:58.419 14:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:58.419 14:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:58.419 14:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:58.419 14:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:58.419 14:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:58.419 14:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:36:58.419 14:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:58.419 14:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:58.419 14:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:58.419 14:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:58.419 14:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:58.419 14:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:58.419 14:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:58.419 14:07:58 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:58.419 14:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:58.419 14:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:58.420 14:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:58.420 14:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:58.420 14:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:58.420 14:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:58.420 14:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:58.420 14:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:58.420 14:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:58.678 nvme0n1 00:36:58.678 14:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:58.678 14:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:58.678 14:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:58.678 14:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:58.678 14:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:58.678 14:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:58.678 14:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:58.678 14:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:58.678 14:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:58.678 14:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:58.678 14:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:58.678 14:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:58.678 14:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:36:58.678 14:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:58.678 14:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:58.678 14:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:58.678 14:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:58.678 14:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjY0NzU4ZGZkODAxYWRjN2E3MGExMDA3Yjg3YWYwNjRlNzM3ZmFkYTQyZTA2NjVjOSSd0Q==: 00:36:58.678 14:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDUyNmQ1ZGM1MTVhMDEzYjRkMmE3Yjc1ODY0ZGI3N2QxlQ6S: 00:36:58.678 14:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:58.679 14:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@49 -- # echo ffdhe4096 00:36:58.679 14:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjY0NzU4ZGZkODAxYWRjN2E3MGExMDA3Yjg3YWYwNjRlNzM3ZmFkYTQyZTA2NjVjOSSd0Q==: 00:36:58.679 14:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDUyNmQ1ZGM1MTVhMDEzYjRkMmE3Yjc1ODY0ZGI3N2QxlQ6S: ]] 00:36:58.679 14:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDUyNmQ1ZGM1MTVhMDEzYjRkMmE3Yjc1ODY0ZGI3N2QxlQ6S: 00:36:58.679 14:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:36:58.679 14:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:58.679 14:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:58.679 14:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:58.679 14:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:58.679 14:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:58.679 14:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:36:58.679 14:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:58.679 14:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:58.679 14:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:58.679 14:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:58.679 14:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:58.679 14:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:58.679 14:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:58.679 14:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:58.679 14:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:58.679 14:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:58.679 14:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:58.679 14:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:58.679 14:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:58.679 14:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:58.679 14:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:58.679 14:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:58.679 14:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:58.939 nvme0n1 00:36:58.939 14:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:58.939 14:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:58.939 
14:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:58.939 14:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:58.939 14:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:58.939 14:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:58.939 14:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:58.939 14:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:58.939 14:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:58.939 14:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:58.939 14:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:58.939 14:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:58.939 14:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:36:58.939 14:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:58.939 14:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:58.939 14:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:58.939 14:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:58.939 14:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTVlNzViMTg5YjNkNzFjNDg4NGJhOTA5YmEyMDA3ZjA5OWRjNmQxYjg0NmY5ZTNhYTc1NjBmMDlmY2M1NGJiM1abWWA=: 00:36:58.939 14:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:58.939 14:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:58.939 14:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:58.939 14:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTVlNzViMTg5YjNkNzFjNDg4NGJhOTA5YmEyMDA3ZjA5OWRjNmQxYjg0NmY5ZTNhYTc1NjBmMDlmY2M1NGJiM1abWWA=: 00:36:58.939 14:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:58.939 14:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:36:58.939 14:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:58.939 14:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:58.939 14:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:58.939 14:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:58.939 14:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:58.939 14:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:36:58.939 14:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:58.939 14:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:58.939 14:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:58.939 14:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:36:58.939 14:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:58.939 14:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:58.939 14:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:58.939 14:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:58.939 14:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:58.939 14:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:58.939 14:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:58.939 14:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:58.939 14:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:58.939 14:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:58.939 14:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:58.939 14:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:58.939 14:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:59.198 nvme0n1 00:36:59.198 14:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:59.198 14:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:59.198 14:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:59.198 14:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:59.198 14:07:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:59.198 14:07:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:59.198 14:07:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:59.198 14:07:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:59.198 14:07:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:59.198 14:07:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:59.198 14:07:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:59.198 14:07:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:59.198 14:07:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:59.198 14:07:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:36:59.198 14:07:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:59.198 14:07:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:59.198 14:07:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:59.458 14:07:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:59.458 14:07:59 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWM0OGViNTIyZGJmNGM4MDdhNzkxNTFhOTI0NDNmNmOjM+Tl: 00:36:59.458 14:07:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjU4NTc1NmZhYzA4ZjQ0Y2I5NDliY2NjOGUwZTVkOTQ2OTNlNjk5NTVjMGZkZWVlNjEwMzM4NTg4MzliMzkzY5vPHe0=: 00:36:59.458 14:07:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:59.458 14:07:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:59.458 14:07:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWM0OGViNTIyZGJmNGM4MDdhNzkxNTFhOTI0NDNmNmOjM+Tl: 00:36:59.458 14:07:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjU4NTc1NmZhYzA4ZjQ0Y2I5NDliY2NjOGUwZTVkOTQ2OTNlNjk5NTVjMGZkZWVlNjEwMzM4NTg4MzliMzkzY5vPHe0=: ]] 00:36:59.458 14:07:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjU4NTc1NmZhYzA4ZjQ0Y2I5NDliY2NjOGUwZTVkOTQ2OTNlNjk5NTVjMGZkZWVlNjEwMzM4NTg4MzliMzkzY5vPHe0=: 00:36:59.458 14:07:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:36:59.458 14:07:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:59.458 14:07:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:59.458 14:07:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:59.458 14:07:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:59.458 14:07:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:59.458 14:07:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:36:59.458 14:07:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:59.458 14:07:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:59.458 14:07:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:59.458 14:07:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:59.458 14:07:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:59.458 14:07:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:59.458 14:07:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:59.458 14:07:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:59.458 14:07:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:59.458 14:07:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:59.458 14:07:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:59.458 14:07:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:59.458 14:07:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:59.458 14:07:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:59.458 14:07:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:59.458 14:07:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:59.458 14:07:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:59.718 nvme0n1 00:36:59.718 14:07:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:59.718 14:07:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:59.718 14:07:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:59.718 14:07:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:59.718 14:07:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:59.718 14:07:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:59.718 14:07:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:59.718 14:07:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:59.718 14:07:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:59.718 14:07:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:59.718 14:07:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:59.718 14:07:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:59.718 14:07:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:36:59.718 14:07:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:59.718 14:07:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:59.718 14:07:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:59.718 14:07:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:59.718 14:07:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2UwNmY2ZGFjZDY1NTEwNGU3YjRiMjkzNWFlYTY1MzYyNzM1MTM3Mzc3NmM3ZGY3oPZozQ==: 00:36:59.718 14:07:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWQ5ZjY4NjlkMDcyNmIyZGE3NjZhZmEwNzMyMTA1YzdmNDAwZDg0NzEwMDQyNzUy92f4Lg==: 00:36:59.718 14:07:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:59.718 14:07:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:59.718 14:07:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2UwNmY2ZGFjZDY1NTEwNGU3YjRiMjkzNWFlYTY1MzYyNzM1MTM3Mzc3NmM3ZGY3oPZozQ==: 00:36:59.718 14:07:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWQ5ZjY4NjlkMDcyNmIyZGE3NjZhZmEwNzMyMTA1YzdmNDAwZDg0NzEwMDQyNzUy92f4Lg==: ]] 00:36:59.718 14:07:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWQ5ZjY4NjlkMDcyNmIyZGE3NjZhZmEwNzMyMTA1YzdmNDAwZDg0NzEwMDQyNzUy92f4Lg==: 00:36:59.718 14:07:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:36:59.718 14:07:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:59.718 14:07:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:59.718 14:07:59 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:59.718 14:07:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:59.718 14:07:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:59.718 14:07:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:36:59.718 14:07:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:59.718 14:07:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:59.718 14:07:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:59.718 14:07:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:59.718 14:07:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:59.718 14:07:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:59.718 14:07:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:59.718 14:07:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:59.718 14:07:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:59.718 14:07:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:36:59.718 14:07:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:59.718 14:07:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:36:59.719 14:07:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:36:59.719 14:07:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:36:59.719 14:07:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:59.719 14:07:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:59.719 14:07:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:00.286 nvme0n1 00:37:00.286 14:07:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:00.286 14:07:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:00.286 14:07:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:00.286 14:07:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:00.286 14:07:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:00.287 14:07:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:00.287 14:07:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:00.287 14:07:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:00.287 14:07:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:00.287 14:07:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
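[Annotation] The dhgroup stepping from ffdhe3072 through ffdhe4096 to ffdhe6144, with keyid cycling 0 through 4 inside each group, comes from the nested loops at host/auth.sh@101-104 visible in the trace markers. Reconstructed from those markers (the exact array contents beyond what the trace echoes are assumptions), the driver loop has roughly this shape:

    # keys/ckeys hold the DHHC-1 secrets echoed in the trace, indexed by keyid
    for dhgroup in "${dhgroups[@]}"; do            # ffdhe3072 ffdhe4096 ffdhe6144 ...
        for keyid in "${!keys[@]}"; do             # 0 1 2 3 4
            nvmet_auth_set_key sha512 "$dhgroup" "$keyid"      # program the target
            connect_authenticate sha512 "$dhgroup" "$keyid"    # attach/verify/detach
        done
    done
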
00:37:00.287 14:07:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:00.287 14:07:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:00.287 14:07:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:37:00.287 14:07:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:00.287 14:07:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:00.287 14:07:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:37:00.287 14:07:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:37:00.287 14:07:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWNhODY3NDMyOTlmNjZkYmZiNjU4OTQ0YTljNWMxYWKXTIz0: 00:37:00.287 14:07:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzgxOGQzN2RkNjFjYzE3MTRiMTI1NGE5NDBkZWU2MTPqGmQM: 00:37:00.287 14:07:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:00.287 14:07:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:37:00.287 14:07:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWNhODY3NDMyOTlmNjZkYmZiNjU4OTQ0YTljNWMxYWKXTIz0: 00:37:00.287 14:07:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzgxOGQzN2RkNjFjYzE3MTRiMTI1NGE5NDBkZWU2MTPqGmQM: ]] 00:37:00.287 14:07:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzgxOGQzN2RkNjFjYzE3MTRiMTI1NGE5NDBkZWU2MTPqGmQM: 00:37:00.287 14:07:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:37:00.287 14:07:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:00.287 14:07:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:00.287 14:07:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:37:00.287 14:07:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:37:00.287 14:07:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:00.287 14:07:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:37:00.287 14:07:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:00.287 14:07:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:00.287 14:07:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:00.287 14:07:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:00.287 14:07:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:00.287 14:07:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:00.287 14:07:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:00.287 14:07:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:00.287 14:07:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:00.287 14:07:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 
00:37:00.287 14:07:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:37:00.287 14:07:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:37:00.287 14:07:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:37:00.287 14:07:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:37:00.287 14:07:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:37:00.287 14:07:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:00.287 14:07:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:00.546 nvme0n1 00:37:00.546 14:08:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:00.546 14:08:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:00.546 14:08:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:00.546 14:08:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:00.546 14:08:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:00.546 14:08:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:00.806 14:08:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:00.806 14:08:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:00.806 14:08:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:00.806 14:08:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:00.806 14:08:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:00.806 14:08:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:00.806 14:08:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:37:00.806 14:08:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:00.806 14:08:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:00.806 14:08:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:37:00.806 14:08:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:37:00.806 14:08:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjY0NzU4ZGZkODAxYWRjN2E3MGExMDA3Yjg3YWYwNjRlNzM3ZmFkYTQyZTA2NjVjOSSd0Q==: 00:37:00.806 14:08:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDUyNmQ1ZGM1MTVhMDEzYjRkMmE3Yjc1ODY0ZGI3N2QxlQ6S: 00:37:00.806 14:08:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:00.806 14:08:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:37:00.806 14:08:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjY0NzU4ZGZkODAxYWRjN2E3MGExMDA3Yjg3YWYwNjRlNzM3ZmFkYTQyZTA2NjVjOSSd0Q==: 00:37:00.806 14:08:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:MDUyNmQ1ZGM1MTVhMDEzYjRkMmE3Yjc1ODY0ZGI3N2QxlQ6S: ]] 00:37:00.806 14:08:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDUyNmQ1ZGM1MTVhMDEzYjRkMmE3Yjc1ODY0ZGI3N2QxlQ6S: 00:37:00.806 14:08:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:37:00.806 14:08:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:00.806 14:08:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:00.806 14:08:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:37:00.806 14:08:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:37:00.806 14:08:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:00.806 14:08:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:37:00.806 14:08:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:00.806 14:08:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:00.806 14:08:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:00.806 14:08:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:00.806 14:08:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:00.806 14:08:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:00.806 14:08:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:00.806 14:08:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:00.806 14:08:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:00.806 14:08:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:37:00.806 14:08:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:37:00.806 14:08:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:37:00.806 14:08:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:37:00.806 14:08:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:37:00.806 14:08:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:37:00.806 14:08:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:00.806 14:08:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:01.065 nvme0n1 00:37:01.066 14:08:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:01.066 14:08:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:01.066 14:08:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:01.066 14:08:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:01.066 14:08:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set 
+x 00:37:01.066 14:08:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:01.066 14:08:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:01.066 14:08:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:01.066 14:08:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:01.066 14:08:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:01.066 14:08:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:01.066 14:08:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:01.066 14:08:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:37:01.066 14:08:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:01.066 14:08:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:01.066 14:08:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:37:01.066 14:08:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:37:01.066 14:08:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTVlNzViMTg5YjNkNzFjNDg4NGJhOTA5YmEyMDA3ZjA5OWRjNmQxYjg0NmY5ZTNhYTc1NjBmMDlmY2M1NGJiM1abWWA=: 00:37:01.066 14:08:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:37:01.066 14:08:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:01.066 14:08:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:37:01.066 14:08:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTVlNzViMTg5YjNkNzFjNDg4NGJhOTA5YmEyMDA3ZjA5OWRjNmQxYjg0NmY5ZTNhYTc1NjBmMDlmY2M1NGJiM1abWWA=: 00:37:01.066 14:08:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:37:01.066 14:08:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:37:01.066 14:08:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:01.066 14:08:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:01.066 14:08:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:37:01.066 14:08:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:37:01.066 14:08:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:01.066 14:08:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:37:01.066 14:08:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:01.066 14:08:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:01.066 14:08:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:01.066 14:08:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:01.066 14:08:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:01.066 14:08:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:01.066 14:08:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 
-- # local -A ip_candidates 00:37:01.066 14:08:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:01.066 14:08:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:01.066 14:08:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:37:01.066 14:08:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:37:01.066 14:08:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:37:01.066 14:08:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:37:01.066 14:08:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:37:01.066 14:08:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:37:01.066 14:08:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:01.066 14:08:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:01.633 nvme0n1 00:37:01.633 14:08:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:01.633 14:08:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:01.633 14:08:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:01.633 14:08:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:01.633 14:08:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:01.633 14:08:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:01.633 14:08:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:01.633 14:08:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:01.633 14:08:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:01.633 14:08:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:01.633 14:08:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:01.633 14:08:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:37:01.633 14:08:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:01.633 14:08:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:37:01.633 14:08:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:01.633 14:08:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:01.633 14:08:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:37:01.633 14:08:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:37:01.634 14:08:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWM0OGViNTIyZGJmNGM4MDdhNzkxNTFhOTI0NDNmNmOjM+Tl: 00:37:01.634 14:08:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjU4NTc1NmZhYzA4ZjQ0Y2I5NDliY2NjOGUwZTVkOTQ2OTNlNjk5NTVjMGZkZWVlNjEwMzM4NTg4MzliMzkzY5vPHe0=: 
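The @101-@104 frames just traced mark the top of the driver loops: the ffdhe6144 pass has finished and the ffdhe8192 pass begins at keyid 0. Reconstructed from those frame markers (the outer digest loop is implied by the sha512 literals rather than traced in this excerpt):

    # one full pass per DH group: install each key pair on the target, then
    # prove the host can authenticate a connect with it (auth.sh@101-104)
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
            connect_authenticate "$digest" "$dhgroup" "$keyid"
        done
    done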
00:37:01.634 14:08:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:01.634 14:08:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:37:01.634 14:08:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWM0OGViNTIyZGJmNGM4MDdhNzkxNTFhOTI0NDNmNmOjM+Tl: 00:37:01.634 14:08:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjU4NTc1NmZhYzA4ZjQ0Y2I5NDliY2NjOGUwZTVkOTQ2OTNlNjk5NTVjMGZkZWVlNjEwMzM4NTg4MzliMzkzY5vPHe0=: ]] 00:37:01.634 14:08:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjU4NTc1NmZhYzA4ZjQ0Y2I5NDliY2NjOGUwZTVkOTQ2OTNlNjk5NTVjMGZkZWVlNjEwMzM4NTg4MzliMzkzY5vPHe0=: 00:37:01.634 14:08:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:37:01.634 14:08:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:01.634 14:08:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:01.634 14:08:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:37:01.634 14:08:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:37:01.634 14:08:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:01.634 14:08:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:37:01.634 14:08:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:01.634 14:08:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:01.634 14:08:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:01.634 14:08:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:01.634 14:08:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:01.634 14:08:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:01.634 14:08:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:01.634 14:08:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:01.634 14:08:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:01.634 14:08:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:37:01.634 14:08:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:37:01.634 14:08:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:37:01.634 14:08:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:37:01.634 14:08:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:37:01.634 14:08:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:37:01.634 14:08:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:01.634 14:08:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:02.201 nvme0n1 
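One detail worth flagging before the next iterations scroll past: keyid 4 (late in the ffdhe6144 pass above) carried no controller secret, so its attach went out with --dhchap-key key4 and nothing else, while keyid 0 here sends both key0 and ckey0. The array expansion traced at auth.sh@58 is what makes the flag pair optional; sketched with the trace's own names ($ip stands in for the 192.168.100.8 that get_main_ns_ip resolves):

    # the :+ expansion yields the two extra flags only when a controller key
    # exists for this keyid; an empty ckeys[keyid] leaves the array empty
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a "$ip" \
        -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" "${ckey[@]}"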
00:37:02.201 14:08:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:02.201 14:08:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:02.201 14:08:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:02.201 14:08:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:02.201 14:08:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:02.201 14:08:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:02.201 14:08:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:02.201 14:08:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:02.201 14:08:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:02.201 14:08:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:02.201 14:08:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:02.201 14:08:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:02.201 14:08:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:37:02.201 14:08:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:02.201 14:08:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:02.201 14:08:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:37:02.201 14:08:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:37:02.201 14:08:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2UwNmY2ZGFjZDY1NTEwNGU3YjRiMjkzNWFlYTY1MzYyNzM1MTM3Mzc3NmM3ZGY3oPZozQ==: 00:37:02.201 14:08:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWQ5ZjY4NjlkMDcyNmIyZGE3NjZhZmEwNzMyMTA1YzdmNDAwZDg0NzEwMDQyNzUy92f4Lg==: 00:37:02.201 14:08:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:02.201 14:08:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:37:02.201 14:08:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2UwNmY2ZGFjZDY1NTEwNGU3YjRiMjkzNWFlYTY1MzYyNzM1MTM3Mzc3NmM3ZGY3oPZozQ==: 00:37:02.201 14:08:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWQ5ZjY4NjlkMDcyNmIyZGE3NjZhZmEwNzMyMTA1YzdmNDAwZDg0NzEwMDQyNzUy92f4Lg==: ]] 00:37:02.201 14:08:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWQ5ZjY4NjlkMDcyNmIyZGE3NjZhZmEwNzMyMTA1YzdmNDAwZDg0NzEwMDQyNzUy92f4Lg==: 00:37:02.201 14:08:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:37:02.201 14:08:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:02.201 14:08:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:02.201 14:08:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:37:02.201 14:08:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:37:02.201 14:08:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:02.201 14:08:01 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:37:02.201 14:08:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:02.201 14:08:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:02.201 14:08:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:02.201 14:08:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:02.201 14:08:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:02.201 14:08:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:02.201 14:08:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:02.201 14:08:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:02.201 14:08:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:02.201 14:08:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:37:02.201 14:08:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:37:02.201 14:08:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:37:02.201 14:08:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:37:02.201 14:08:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:37:02.201 14:08:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:37:02.201 14:08:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:02.201 14:08:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:02.770 nvme0n1 00:37:02.770 14:08:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:02.770 14:08:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:02.770 14:08:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:02.770 14:08:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:02.770 14:08:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:02.770 14:08:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:02.770 14:08:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:02.770 14:08:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:02.770 14:08:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:02.770 14:08:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:02.770 14:08:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:02.770 14:08:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:02.770 14:08:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 
2 00:37:02.770 14:08:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:02.770 14:08:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:02.770 14:08:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:37:02.770 14:08:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:37:02.770 14:08:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWNhODY3NDMyOTlmNjZkYmZiNjU4OTQ0YTljNWMxYWKXTIz0: 00:37:02.770 14:08:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzgxOGQzN2RkNjFjYzE3MTRiMTI1NGE5NDBkZWU2MTPqGmQM: 00:37:02.770 14:08:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:02.770 14:08:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:37:02.770 14:08:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWNhODY3NDMyOTlmNjZkYmZiNjU4OTQ0YTljNWMxYWKXTIz0: 00:37:02.770 14:08:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzgxOGQzN2RkNjFjYzE3MTRiMTI1NGE5NDBkZWU2MTPqGmQM: ]] 00:37:02.770 14:08:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzgxOGQzN2RkNjFjYzE3MTRiMTI1NGE5NDBkZWU2MTPqGmQM: 00:37:02.770 14:08:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:37:02.770 14:08:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:02.770 14:08:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:02.770 14:08:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:37:02.770 14:08:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:37:02.770 14:08:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:02.770 14:08:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:37:02.770 14:08:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:02.770 14:08:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:02.770 14:08:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:02.770 14:08:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:02.770 14:08:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:02.770 14:08:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:02.770 14:08:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:02.770 14:08:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:02.770 14:08:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:02.770 14:08:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:37:02.770 14:08:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:37:02.770 14:08:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:37:02.770 14:08:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 
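Each connect_authenticate also pins the host to exactly one digest/dhgroup pair first (the bdev_nvme_set_options call at auth.sh@60 above), so a successful attach proves that specific combination actually negotiated. rpc_cmd in this harness wraps SPDK's scripts/rpc.py, so the equivalent standalone call would presumably be:

    # host-side DH-CHAP policy for this iteration (flags copied from the trace)
    ./scripts/rpc.py bdev_nvme_set_options \
        --dhchap-digests sha512 \
        --dhchap-dhgroups ffdhe8192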
00:37:02.770 14:08:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:37:02.770 14:08:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:37:02.770 14:08:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:02.770 14:08:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:03.338 nvme0n1 00:37:03.338 14:08:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:03.338 14:08:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:03.338 14:08:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:03.339 14:08:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:03.339 14:08:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:03.339 14:08:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:03.598 14:08:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:03.598 14:08:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:03.598 14:08:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:03.598 14:08:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:03.598 14:08:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:03.598 14:08:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:03.598 14:08:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:37:03.598 14:08:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:03.598 14:08:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:03.598 14:08:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:37:03.598 14:08:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:37:03.598 14:08:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjY0NzU4ZGZkODAxYWRjN2E3MGExMDA3Yjg3YWYwNjRlNzM3ZmFkYTQyZTA2NjVjOSSd0Q==: 00:37:03.598 14:08:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDUyNmQ1ZGM1MTVhMDEzYjRkMmE3Yjc1ODY0ZGI3N2QxlQ6S: 00:37:03.598 14:08:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:03.598 14:08:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:37:03.598 14:08:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjY0NzU4ZGZkODAxYWRjN2E3MGExMDA3Yjg3YWYwNjRlNzM3ZmFkYTQyZTA2NjVjOSSd0Q==: 00:37:03.598 14:08:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDUyNmQ1ZGM1MTVhMDEzYjRkMmE3Yjc1ODY0ZGI3N2QxlQ6S: ]] 00:37:03.598 14:08:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDUyNmQ1ZGM1MTVhMDEzYjRkMmE3Yjc1ODY0ZGI3N2QxlQ6S: 00:37:03.598 14:08:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:37:03.598 14:08:03 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:03.598 14:08:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:03.598 14:08:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:37:03.598 14:08:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:37:03.598 14:08:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:03.598 14:08:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:37:03.598 14:08:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:03.598 14:08:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:03.598 14:08:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:03.598 14:08:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:03.598 14:08:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:03.598 14:08:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:03.598 14:08:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:03.598 14:08:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:03.598 14:08:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:03.598 14:08:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:37:03.598 14:08:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:37:03.598 14:08:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:37:03.598 14:08:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:37:03.598 14:08:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:37:03.598 14:08:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:37:03.598 14:08:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:03.598 14:08:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:04.167 nvme0n1 00:37:04.167 14:08:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:04.167 14:08:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:04.167 14:08:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:04.167 14:08:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:04.167 14:08:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:04.167 14:08:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:04.167 14:08:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:04.167 14:08:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 
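get_main_ns_ip, traced once more in the frames above, reduces to a transport-keyed variable lookup plus an indirect expansion; condensed from the nvmf/common.sh@769-783 steps (the name of the transport variable is a guess, since the trace only shows the literal rdma):

    # pick the address variable for the active transport, then print its value
    # (resolves to 192.168.100.8 for rdma in this run)
    get_main_ns_ip() {
        local ip
        local -A ip_candidates=(
            [rdma]=NVMF_FIRST_TARGET_IP
            [tcp]=NVMF_INITIATOR_IP
        )
        ip=${ip_candidates[$TEST_TRANSPORT]}
        echo "${!ip}"   # indirect expansion: the value of NVMF_FIRST_TARGET_IP
    }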
00:37:04.167 14:08:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:04.167 14:08:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:04.167 14:08:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:04.167 14:08:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:04.167 14:08:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:37:04.167 14:08:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:04.167 14:08:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:04.167 14:08:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:37:04.167 14:08:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:37:04.167 14:08:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTVlNzViMTg5YjNkNzFjNDg4NGJhOTA5YmEyMDA3ZjA5OWRjNmQxYjg0NmY5ZTNhYTc1NjBmMDlmY2M1NGJiM1abWWA=: 00:37:04.167 14:08:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:37:04.167 14:08:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:04.167 14:08:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:37:04.167 14:08:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTVlNzViMTg5YjNkNzFjNDg4NGJhOTA5YmEyMDA3ZjA5OWRjNmQxYjg0NmY5ZTNhYTc1NjBmMDlmY2M1NGJiM1abWWA=: 00:37:04.167 14:08:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:37:04.167 14:08:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:37:04.167 14:08:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:04.167 14:08:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:04.167 14:08:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:37:04.167 14:08:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:37:04.167 14:08:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:04.167 14:08:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:37:04.167 14:08:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:04.167 14:08:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:04.167 14:08:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:04.167 14:08:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:04.167 14:08:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:04.167 14:08:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:04.167 14:08:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:04.167 14:08:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:04.167 14:08:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:04.167 14:08:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@775 -- # [[ -z rdma ]] 00:37:04.167 14:08:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:37:04.167 14:08:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:37:04.167 14:08:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:37:04.167 14:08:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:37:04.167 14:08:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:37:04.167 14:08:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:04.167 14:08:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:04.736 nvme0n1 00:37:04.736 14:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:04.736 14:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:04.736 14:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:04.736 14:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:04.736 14:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:04.736 14:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:04.736 14:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:04.736 14:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:04.736 14:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:04.736 14:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:04.736 14:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:04.736 14:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:37:04.736 14:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:04.736 14:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:04.736 14:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:37:04.736 14:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:37:04.736 14:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2UwNmY2ZGFjZDY1NTEwNGU3YjRiMjkzNWFlYTY1MzYyNzM1MTM3Mzc3NmM3ZGY3oPZozQ==: 00:37:04.736 14:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWQ5ZjY4NjlkMDcyNmIyZGE3NjZhZmEwNzMyMTA1YzdmNDAwZDg0NzEwMDQyNzUy92f4Lg==: 00:37:04.736 14:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:04.736 14:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:37:04.736 14:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2UwNmY2ZGFjZDY1NTEwNGU3YjRiMjkzNWFlYTY1MzYyNzM1MTM3Mzc3NmM3ZGY3oPZozQ==: 00:37:04.736 14:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWQ5ZjY4NjlkMDcyNmIyZGE3NjZhZmEwNzMyMTA1YzdmNDAwZDg0NzEwMDQyNzUy92f4Lg==: ]] 00:37:04.736 
14:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWQ5ZjY4NjlkMDcyNmIyZGE3NjZhZmEwNzMyMTA1YzdmNDAwZDg0NzEwMDQyNzUy92f4Lg==: 00:37:04.736 14:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:37:04.736 14:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:04.736 14:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:04.736 14:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:04.736 14:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:37:04.736 14:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:04.736 14:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:04.736 14:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:04.736 14:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:04.736 14:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:04.736 14:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:37:04.736 14:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:37:04.736 14:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:37:04.736 14:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:37:04.736 14:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:37:04.736 14:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:37:04.736 14:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:37:04.736 14:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:37:04.736 14:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:37:04.736 14:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:04.736 14:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:37:04.736 14:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:04.736 14:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:37:04.736 14:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:04.736 14:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:04.736 request: 00:37:04.736 { 00:37:04.736 "name": "nvme0", 00:37:04.736 "trtype": "rdma", 00:37:04.736 "traddr": "192.168.100.8", 00:37:04.736 "adrfam": "ipv4", 00:37:04.736 "trsvcid": "4420", 00:37:04.736 "subnqn": 
"nqn.2024-02.io.spdk:cnode0", 00:37:04.736 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:37:04.736 "prchk_reftag": false, 00:37:04.736 "prchk_guard": false, 00:37:04.736 "hdgst": false, 00:37:04.736 "ddgst": false, 00:37:04.736 "allow_unrecognized_csi": false, 00:37:04.736 "method": "bdev_nvme_attach_controller", 00:37:04.736 "req_id": 1 00:37:04.736 } 00:37:04.736 Got JSON-RPC error response 00:37:04.736 response: 00:37:04.736 { 00:37:04.736 "code": -5, 00:37:04.736 "message": "Input/output error" 00:37:04.736 } 00:37:04.736 14:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:37:04.736 14:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:37:04.736 14:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:37:04.736 14:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:37:04.736 14:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:37:04.736 14:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:37:04.736 14:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:37:04.736 14:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:04.736 14:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:04.999 14:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:04.999 14:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:37:04.999 14:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:37:04.999 14:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:04.999 14:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:04.999 14:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:04.999 14:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:04.999 14:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:04.999 14:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:37:04.999 14:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:37:04.999 14:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:37:04.999 14:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:37:04.999 14:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:37:04.999 14:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:37:04.999 14:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:37:04.999 14:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:37:04.999 14:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:37:04.999 14:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:04.999 14:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:37:04.999 14:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:04.999 14:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:37:04.999 14:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:04.999 14:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:04.999 request: 00:37:04.999 { 00:37:04.999 "name": "nvme0", 00:37:04.999 "trtype": "rdma", 00:37:04.999 "traddr": "192.168.100.8", 00:37:04.999 "adrfam": "ipv4", 00:37:04.999 "trsvcid": "4420", 00:37:04.999 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:37:04.999 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:37:04.999 "prchk_reftag": false, 00:37:04.999 "prchk_guard": false, 00:37:04.999 "hdgst": false, 00:37:04.999 "ddgst": false, 00:37:04.999 "dhchap_key": "key2", 00:37:04.999 "allow_unrecognized_csi": false, 00:37:04.999 "method": "bdev_nvme_attach_controller", 00:37:04.999 "req_id": 1 00:37:04.999 } 00:37:04.999 Got JSON-RPC error response 00:37:04.999 response: 00:37:04.999 { 00:37:04.999 "code": -5, 00:37:04.999 "message": "Input/output error" 00:37:04.999 } 00:37:04.999 14:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:37:04.999 14:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:37:04.999 14:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:37:04.999 14:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:37:04.999 14:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:37:04.999 14:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:37:04.999 14:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:04.999 14:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:37:04.999 14:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:04.999 14:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:04.999 14:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:37:04.999 14:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:37:04.999 14:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:04.999 14:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:04.999 14:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:04.999 14:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:04.999 14:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:04.999 14:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma 
]] 00:37:04.999 14:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:37:04.999 14:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:37:04.999 14:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:37:04.999 14:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:37:04.999 14:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:37:04.999 14:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:37:04.999 14:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:37:04.999 14:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:37:04.999 14:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:04.999 14:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:37:04.999 14:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:04.999 14:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:37:04.999 14:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:04.999 14:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:05.259 request: 00:37:05.259 { 00:37:05.259 "name": "nvme0", 00:37:05.259 "trtype": "rdma", 00:37:05.259 "traddr": "192.168.100.8", 00:37:05.259 "adrfam": "ipv4", 00:37:05.259 "trsvcid": "4420", 00:37:05.259 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:37:05.259 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:37:05.259 "prchk_reftag": false, 00:37:05.259 "prchk_guard": false, 00:37:05.259 "hdgst": false, 00:37:05.259 "ddgst": false, 00:37:05.259 "dhchap_key": "key1", 00:37:05.259 "dhchap_ctrlr_key": "ckey2", 00:37:05.259 "allow_unrecognized_csi": false, 00:37:05.259 "method": "bdev_nvme_attach_controller", 00:37:05.259 "req_id": 1 00:37:05.259 } 00:37:05.259 Got JSON-RPC error response 00:37:05.259 response: 00:37:05.259 { 00:37:05.259 "code": -5, 00:37:05.259 "message": "Input/output error" 00:37:05.259 } 00:37:05.259 14:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:37:05.259 14:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:37:05.259 14:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:37:05.259 14:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:37:05.259 14:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:37:05.259 14:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:37:05.259 14:08:04 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:05.259 14:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:05.259 14:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:05.259 14:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:05.259 14:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:05.259 14:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:37:05.259 14:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:37:05.259 14:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:37:05.259 14:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:37:05.259 14:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:37:05.259 14:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:37:05.259 14:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:05.259 14:08:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:05.259 nvme0n1 00:37:05.259 14:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:05.259 14:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:37:05.259 14:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:05.259 14:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:05.259 14:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:37:05.259 14:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:37:05.259 14:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWNhODY3NDMyOTlmNjZkYmZiNjU4OTQ0YTljNWMxYWKXTIz0: 00:37:05.259 14:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzgxOGQzN2RkNjFjYzE3MTRiMTI1NGE5NDBkZWU2MTPqGmQM: 00:37:05.259 14:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:05.259 14:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:37:05.259 14:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWNhODY3NDMyOTlmNjZkYmZiNjU4OTQ0YTljNWMxYWKXTIz0: 00:37:05.259 14:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzgxOGQzN2RkNjFjYzE3MTRiMTI1NGE5NDBkZWU2MTPqGmQM: ]] 00:37:05.259 14:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzgxOGQzN2RkNjFjYzE3MTRiMTI1NGE5NDBkZWU2MTPqGmQM: 00:37:05.259 14:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:37:05.259 14:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:05.259 14:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:05.517 
14:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:05.517 14:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:37:05.517 14:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:37:05.517 14:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:05.517 14:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:05.517 14:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:05.517 14:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:05.517 14:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:37:05.517 14:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:37:05.517 14:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:37:05.517 14:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:37:05.517 14:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:05.517 14:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:37:05.517 14:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:05.517 14:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:37:05.517 14:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:05.517 14:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:05.517 request: 00:37:05.517 { 00:37:05.517 "name": "nvme0", 00:37:05.517 "dhchap_key": "key1", 00:37:05.517 "dhchap_ctrlr_key": "ckey2", 00:37:05.517 "method": "bdev_nvme_set_keys", 00:37:05.517 "req_id": 1 00:37:05.517 } 00:37:05.517 Got JSON-RPC error response 00:37:05.517 response: 00:37:05.517 { 00:37:05.517 "code": -13, 00:37:05.517 "message": "Permission denied" 00:37:05.517 } 00:37:05.517 14:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:37:05.517 14:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:37:05.517 14:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:37:05.517 14:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:37:05.517 14:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:37:05.517 14:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:37:05.517 14:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:37:05.517 14:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:05.517 14:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:05.517 14:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:05.517 14:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host 
-- host/auth.sh@137 -- # (( 1 != 0 )) 00:37:05.517 14:08:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:37:06.449 14:08:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:37:06.449 14:08:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:37:06.449 14:08:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:06.449 14:08:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:06.449 14:08:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:06.449 14:08:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:37:06.449 14:08:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:37:07.823 14:08:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:37:07.823 14:08:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:37:07.823 14:08:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:07.823 14:08:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:07.823 14:08:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:07.823 14:08:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:37:07.823 14:08:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:37:08.760 14:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:37:08.760 14:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:08.760 14:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:08.760 14:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:37:08.760 14:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:08.760 14:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:37:08.760 14:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:37:08.760 14:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:08.760 14:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:08.760 14:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:37:08.760 14:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:37:08.760 14:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2UwNmY2ZGFjZDY1NTEwNGU3YjRiMjkzNWFlYTY1MzYyNzM1MTM3Mzc3NmM3ZGY3oPZozQ==: 00:37:08.760 14:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWQ5ZjY4NjlkMDcyNmIyZGE3NjZhZmEwNzMyMTA1YzdmNDAwZDg0NzEwMDQyNzUy92f4Lg==: 00:37:08.760 14:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:08.760 14:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:37:08.760 14:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2UwNmY2ZGFjZDY1NTEwNGU3YjRiMjkzNWFlYTY1MzYyNzM1MTM3Mzc3NmM3ZGY3oPZozQ==: 00:37:08.760 14:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:YWQ5ZjY4NjlkMDcyNmIyZGE3NjZhZmEwNzMyMTA1YzdmNDAwZDg0NzEwMDQyNzUy92f4Lg==: ]] 00:37:08.760 14:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWQ5ZjY4NjlkMDcyNmIyZGE3NjZhZmEwNzMyMTA1YzdmNDAwZDg0NzEwMDQyNzUy92f4Lg==: 00:37:08.760 14:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:37:08.760 14:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:08.760 14:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:08.760 14:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:08.760 14:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:08.760 14:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:08.760 14:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:37:08.760 14:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:37:08.760 14:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:37:08.760 14:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:37:08.760 14:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:37:08.760 14:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:37:08.760 14:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:08.760 14:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:08.760 nvme0n1 00:37:08.760 14:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:08.760 14:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:37:08.760 14:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:08.760 14:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:08.760 14:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:37:08.760 14:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:37:08.760 14:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWNhODY3NDMyOTlmNjZkYmZiNjU4OTQ0YTljNWMxYWKXTIz0: 00:37:08.760 14:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzgxOGQzN2RkNjFjYzE3MTRiMTI1NGE5NDBkZWU2MTPqGmQM: 00:37:08.760 14:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:08.760 14:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:37:08.760 14:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWNhODY3NDMyOTlmNjZkYmZiNjU4OTQ0YTljNWMxYWKXTIz0: 00:37:08.760 14:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzgxOGQzN2RkNjFjYzE3MTRiMTI1NGE5NDBkZWU2MTPqGmQM: ]] 00:37:08.760 14:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzgxOGQzN2RkNjFjYzE3MTRiMTI1NGE5NDBkZWU2MTPqGmQM: 
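
A reading of the negative test traced around this point, condensed into a standalone sketch. The RPC verbs, NQNs, keys, and address are taken from the log itself; only the scripts/rpc.py invocation style is an assumption (the harness's rpc_cmd wraps SPDK's rpc.py). After the host attaches with the matching key1/ckey1 pair, rotating to a mismatched pair via bdev_nvme_set_keys must be refused by the target with JSON-RPC error -13 (Permission denied) rather than silently re-keying the DH-HMAC-CHAP session:

  # attach succeeds: key1/ckey1 matches what nvmet_auth_set_key configured
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
      -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 \
      -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
  # re-key with a mismatched controller key: expected to fail, not rotate
  scripts/rpc.py bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 \
      || echo "rejected with -13 Permission denied, as the test expects"
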
00:37:08.760 14:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:37:08.760 14:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:37:08.760 14:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:37:08.760 14:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:37:08.760 14:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:08.760 14:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:37:08.760 14:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:08.760 14:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:37:08.760 14:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:08.760 14:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:09.019 request: 00:37:09.019 { 00:37:09.019 "name": "nvme0", 00:37:09.019 "dhchap_key": "key2", 00:37:09.019 "dhchap_ctrlr_key": "ckey1", 00:37:09.019 "method": "bdev_nvme_set_keys", 00:37:09.019 "req_id": 1 00:37:09.019 } 00:37:09.019 Got JSON-RPC error response 00:37:09.019 response: 00:37:09.019 { 00:37:09.019 "code": -13, 00:37:09.019 "message": "Permission denied" 00:37:09.019 } 00:37:09.019 14:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:37:09.019 14:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:37:09.019 14:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:37:09.019 14:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:37:09.019 14:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:37:09.019 14:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:37:09.019 14:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:37:09.019 14:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:09.019 14:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:09.019 14:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:09.019 14:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:37:09.019 14:08:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:37:09.953 14:08:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:37:09.954 14:08:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:37:09.954 14:08:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:09.954 14:08:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:09.954 14:08:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:09.954 14:08:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- 
# (( 1 != 0 )) 00:37:09.954 14:08:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:37:10.884 14:08:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:37:10.884 14:08:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:37:10.884 14:08:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:10.884 14:08:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:11.141 14:08:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:11.141 14:08:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:37:11.141 14:08:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:37:12.137 14:08:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:37:12.137 14:08:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:37:12.137 14:08:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:12.137 14:08:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:12.137 14:08:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:12.137 14:08:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:37:12.137 14:08:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:37:12.137 14:08:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:37:12.137 14:08:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:37:12.137 14:08:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:12.137 14:08:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:37:12.137 14:08:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:37:12.137 14:08:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:37:12.137 14:08:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:37:12.137 14:08:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:12.137 14:08:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:37:12.137 rmmod nvme_rdma 00:37:12.137 rmmod nvme_fabrics 00:37:12.137 14:08:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:12.137 14:08:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:37:12.137 14:08:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:37:12.137 14:08:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 1938003 ']' 00:37:12.137 14:08:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 1938003 00:37:12.137 14:08:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 1938003 ']' 00:37:12.137 14:08:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 1938003 00:37:12.137 14:08:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:37:12.137 14:08:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:12.137 14:08:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o 
comm= 1938003 00:37:12.137 14:08:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:12.137 14:08:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:12.137 14:08:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1938003' 00:37:12.137 killing process with pid 1938003 00:37:12.137 14:08:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 1938003 00:37:12.137 14:08:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 1938003 00:37:12.395 14:08:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:12.395 14:08:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:37:12.395 14:08:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:37:12.395 14:08:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:37:12.395 14:08:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:37:12.395 14:08:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:37:12.395 14:08:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:37:12.395 14:08:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:37:12.395 14:08:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:37:12.395 14:08:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:37:12.395 14:08:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:37:12.395 14:08:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:37:12.395 14:08:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_rdma nvmet 00:37:12.395 14:08:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:37:15.682 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:37:15.682 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:37:15.682 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:37:15.682 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:37:15.682 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:37:15.682 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:37:15.682 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:37:15.682 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:37:15.682 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:37:15.682 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:37:15.682 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:37:15.682 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:37:15.682 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:37:15.682 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:37:15.682 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:37:15.682 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:37:18.974 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:37:20.351 14:08:19 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.EiG /tmp/spdk.key-null.9v3 /tmp/spdk.key-sha256.AvQ /tmp/spdk.key-sha384.PiR /tmp/spdk.key-sha512.YLr /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvme-auth.log 00:37:20.351 14:08:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:37:22.888 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:37:22.888 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:37:22.888 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:37:22.888 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:37:22.888 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:37:23.148 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:37:23.148 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:37:23.148 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:37:23.148 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:37:23.148 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:37:23.148 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:37:23.148 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:37:23.148 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:37:23.148 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:37:23.148 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:37:23.148 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:37:23.148 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:37:24.524 00:37:24.524 real 1m3.189s 00:37:24.524 user 0m50.406s 00:37:24.524 sys 0m15.737s 00:37:24.524 14:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:24.524 14:08:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:24.524 ************************************ 00:37:24.524 END TEST nvmf_auth_host 00:37:24.524 ************************************ 00:37:24.524 14:08:24 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ rdma == \t\c\p ]] 00:37:24.524 14:08:24 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:37:24.524 14:08:24 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:37:24.524 14:08:24 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:37:24.524 14:08:24 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=rdma 00:37:24.524 14:08:24 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:37:24.524 14:08:24 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:24.524 14:08:24 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:37:24.524 ************************************ 00:37:24.524 START TEST nvmf_bdevperf 00:37:24.524 ************************************ 00:37:24.524 14:08:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=rdma 00:37:24.524 * Looking for test storage... 
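
For reference, the kernel nvmet teardown that the auth cleanup above walks through (host/auth.sh cleanup plus clean_kernel_target), collected into one sketch. The paths are exactly those in the trace; the target of the bare "echo 0" is not visible in the log, so it is omitted here. Order matters, since configfs directories only rmdir cleanly once their symlinks and children are gone:

  rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0
  rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0
  rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
  rmdir /sys/kernel/config/nvmet/ports/1
  rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
  modprobe -r nvmet_rdma nvmet    # unload the kernel target modules last
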
00:37:24.524 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:37:24.524 14:08:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:37:24.524 14:08:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lcov --version 00:37:24.524 14:08:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:37:24.783 14:08:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:37:24.783 14:08:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:24.783 14:08:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:24.783 14:08:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:24.783 14:08:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:37:24.783 14:08:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:37:24.783 14:08:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:37:24.783 14:08:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:37:24.783 14:08:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:37:24.783 14:08:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:37:24.783 14:08:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:37:24.783 14:08:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:24.783 14:08:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:37:24.783 14:08:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:37:24.783 14:08:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:24.783 14:08:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:24.783 14:08:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:37:24.783 14:08:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:37:24.783 14:08:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:24.783 14:08:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:37:24.783 14:08:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:37:24.783 14:08:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:37:24.783 14:08:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:37:24.783 14:08:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:24.783 14:08:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:37:24.783 14:08:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:37:24.783 14:08:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:24.783 14:08:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:24.783 14:08:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:37:24.783 14:08:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:24.783 14:08:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:37:24.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:24.783 --rc genhtml_branch_coverage=1 00:37:24.783 --rc genhtml_function_coverage=1 00:37:24.783 --rc genhtml_legend=1 00:37:24.783 --rc geninfo_all_blocks=1 00:37:24.783 --rc geninfo_unexecuted_blocks=1 00:37:24.783 00:37:24.783 ' 00:37:24.783 14:08:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:37:24.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:24.783 --rc genhtml_branch_coverage=1 00:37:24.783 --rc genhtml_function_coverage=1 00:37:24.783 --rc genhtml_legend=1 00:37:24.783 --rc geninfo_all_blocks=1 00:37:24.783 --rc geninfo_unexecuted_blocks=1 00:37:24.783 00:37:24.783 ' 00:37:24.783 14:08:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:37:24.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:24.783 --rc genhtml_branch_coverage=1 00:37:24.783 --rc genhtml_function_coverage=1 00:37:24.783 --rc genhtml_legend=1 00:37:24.783 --rc geninfo_all_blocks=1 00:37:24.783 --rc geninfo_unexecuted_blocks=1 00:37:24.783 00:37:24.783 ' 00:37:24.783 14:08:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:37:24.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:24.783 --rc genhtml_branch_coverage=1 00:37:24.783 --rc genhtml_function_coverage=1 00:37:24.783 --rc genhtml_legend=1 00:37:24.783 --rc geninfo_all_blocks=1 00:37:24.783 --rc geninfo_unexecuted_blocks=1 00:37:24.783 00:37:24.783 ' 00:37:24.783 14:08:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:37:24.783 14:08:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:37:24.783 14:08:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:24.783 14:08:24 
nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:24.783 14:08:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:24.784 14:08:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:24.784 14:08:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:24.784 14:08:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:24.784 14:08:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:24.784 14:08:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:24.784 14:08:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:24.784 14:08:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:24.784 14:08:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:37:24.784 14:08:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:37:24.784 14:08:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:24.784 14:08:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:24.784 14:08:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:24.784 14:08:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:24.784 14:08:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:37:24.784 14:08:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:37:24.784 14:08:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:24.784 14:08:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:24.784 14:08:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:24.784 14:08:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:24.784 14:08:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:24.784 14:08:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:24.784 14:08:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:37:24.784 14:08:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:24.784 14:08:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:37:24.784 14:08:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:24.784 14:08:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:24.784 14:08:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:24.784 14:08:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:24.784 14:08:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:24.784 14:08:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:37:24.784 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:24.784 14:08:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:24.784 14:08:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:24.784 14:08:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:24.784 14:08:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:37:24.784 14:08:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:37:24.784 14:08:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:37:24.784 14:08:24 
nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:37:24.784 14:08:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:24.784 14:08:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:24.784 14:08:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:24.784 14:08:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:24.784 14:08:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:24.784 14:08:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:24.784 14:08:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:24.784 14:08:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:24.784 14:08:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:24.784 14:08:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:37:24.784 14:08:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:31.351 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:31.351 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:37:31.351 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:31.351 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:31.351 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:31.351 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:31.351 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:31.351 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:37:31.351 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:31.351 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:37:31.351 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:37:31.351 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:37:31.351 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:37:31.351 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:37:31.351 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:37:31.351 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:31.351 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:31.351 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:31.351 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:31.352 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:31.352 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:31.352 14:08:30 
nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:31.352 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:31.352 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:31.352 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:31.352 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:31.352 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:31.352 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:31.352 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:37:31.352 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:37:31.352 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:37:31.352 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:37:31.352 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:37:31.352 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:31.352 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:31.352 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:37:31.352 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:37:31.352 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:37:31.352 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:37:31.352 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:37:31.352 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:37:31.352 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:37:31.352 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:37:31.352 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:31.352 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:37:31.352 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:37:31.352 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:37:31.352 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:37:31.352 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:37:31.352 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:37:31.352 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:37:31.352 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:37:31.352 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:31.352 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ 
mlx5 == e810 ]] 00:37:31.352 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:31.352 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:31.352 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:37:31.352 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:31.352 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:31.352 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:37:31.352 Found net devices under 0000:18:00.0: mlx_0_0 00:37:31.352 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:31.352 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:31.352 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:31.352 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:37:31.352 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:31.352 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:31.352 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:37:31.352 Found net devices under 0000:18:00.1: mlx_0_1 00:37:31.352 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:31.352 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:31.352 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:37:31.352 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:31.352 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:37:31.352 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:37:31.352 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@448 -- # rdma_device_init 00:37:31.352 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:37:31.352 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@62 -- # uname 00:37:31.352 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:37:31.352 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@66 -- # modprobe ib_cm 00:37:31.352 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@67 -- # modprobe ib_core 00:37:31.352 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@68 -- # modprobe ib_umad 00:37:31.352 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:37:31.352 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@70 -- # modprobe iw_cm 00:37:31.352 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:37:31.352 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:37:31.352 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@530 -- # allocate_nic_ips 00:37:31.352 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@76 -- # (( count = 
NVMF_IP_LEAST_ADDR )) 00:37:31.352 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@77 -- # get_rdma_if_list 00:37:31.352 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:37:31.352 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:37:31.352 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:37:31.352 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:37:31.352 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:37:31.352 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:37:31.352 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:37:31.352 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:37:31.352 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@108 -- # echo mlx_0_0 00:37:31.352 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@109 -- # continue 2 00:37:31.352 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:37:31.352 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:37:31.352 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:37:31.352 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:37:31.352 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:37:31.352 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@108 -- # echo mlx_0_1 00:37:31.352 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@109 -- # continue 2 00:37:31.352 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:37:31.352 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:37:31.352 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:37:31.352 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:37:31.352 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:37:31.352 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:37:31.352 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:37:31.352 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:37:31.352 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:37:31.352 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:37:31.352 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:37:31.352 altname enp24s0f0np0 00:37:31.352 altname ens785f0np0 00:37:31.352 inet 192.168.100.8/24 scope global mlx_0_0 00:37:31.352 valid_lft forever preferred_lft forever 00:37:31.352 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:37:31.352 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:37:31.352 14:08:30 
nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:37:31.353 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:37:31.353 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:37:31.353 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:37:31.353 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:37:31.353 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:37:31.353 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:37:31.353 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:37:31.353 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:37:31.353 altname enp24s0f1np1 00:37:31.353 altname ens785f1np1 00:37:31.353 inet 192.168.100.9/24 scope global mlx_0_1 00:37:31.353 valid_lft forever preferred_lft forever 00:37:31.353 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:37:31.353 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:31.353 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:37:31.353 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:37:31.353 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:37:31.353 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@90 -- # get_rdma_if_list 00:37:31.353 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:37:31.353 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:37:31.353 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:37:31.353 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:37:31.353 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:37:31.353 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:37:31.353 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:37:31.353 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:37:31.353 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@108 -- # echo mlx_0_0 00:37:31.353 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@109 -- # continue 2 00:37:31.353 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:37:31.353 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:37:31.353 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:37:31.353 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:37:31.353 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:37:31.353 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@108 -- # echo mlx_0_1 00:37:31.353 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@109 -- # continue 2 00:37:31.353 14:08:30 
nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:37:31.353 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:37:31.353 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:37:31.353 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:37:31.353 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:37:31.353 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:37:31.353 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:37:31.353 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:37:31.353 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:37:31.353 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:37:31.353 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:37:31.353 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:37:31.353 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:37:31.353 192.168.100.9' 00:37:31.353 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:37:31.353 192.168.100.9' 00:37:31.353 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@485 -- # head -n 1 00:37:31.353 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:37:31.353 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:37:31.353 192.168.100.9' 00:37:31.353 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@486 -- # tail -n +2 00:37:31.353 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@486 -- # head -n 1 00:37:31.353 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:37:31.353 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:37:31.353 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:37:31.353 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:37:31.353 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:37:31.353 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:37:31.353 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:37:31.353 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:37:31.353 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:31.353 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:31.353 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:31.353 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=1953869 00:37:31.353 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 1953869 00:37:31.353 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:37:31.353 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 1953869 ']' 00:37:31.353 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:31.353 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:31.353 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:31.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:31.353 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:31.353 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:31.353 [2024-12-05 14:08:30.488506] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 00:37:31.353 [2024-12-05 14:08:30.488562] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:31.353 [2024-12-05 14:08:30.565181] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:37:31.353 [2024-12-05 14:08:30.587694] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:31.353 [2024-12-05 14:08:30.587729] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:31.353 [2024-12-05 14:08:30.587736] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:31.353 [2024-12-05 14:08:30.587742] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:31.353 [2024-12-05 14:08:30.587746] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
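A note on the two addresses the trace just produced: nvmf/common.sh walks the RDMA-capable netdevs reported by rxe_cfg and takes the first IPv4 address off each, which is exactly what the ip/awk/cut pipeline above is doing. A minimal runnable sketch of that logic (the interface names are the ones matched in the trace; the plain loop below stands in for the get_rdma_if_list plumbing):

    # Condensed form of the discovery traced in nvmf/common.sh@90-117 above.
    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }

    RDMA_IP_LIST=$(for nic in mlx_0_0 mlx_0_1; do get_ip_address "$nic"; done)
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                # 192.168.100.8
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # 192.168.100.9

With both 192.168.100.8 and 192.168.100.9 resolved, the script loads nvme-rdma and starts the target via nvmfappstart -m 0xE, whose startup notices appear above and continue below.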
00:37:31.353 [2024-12-05 14:08:30.588966] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:37:31.353 [2024-12-05 14:08:30.589073] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:37:31.353 [2024-12-05 14:08:30.589079] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:31.353 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:31.353 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:37:31.353 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:31.353 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:31.353 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:31.353 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:31.353 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:37:31.353 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:31.353 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:31.353 [2024-12-05 14:08:30.746883] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x5a16c0/0x5a5bb0) succeed. 00:37:31.353 [2024-12-05 14:08:30.754946] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x5a2cb0/0x5e7250) succeed. 00:37:31.354 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:31.354 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:37:31.354 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:31.354 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:31.354 Malloc0 00:37:31.354 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:31.354 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:37:31.354 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:31.354 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:31.354 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:31.354 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:37:31.354 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:31.354 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:31.354 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:31.354 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:37:31.354 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:31.354 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 
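On the target configuration just traced: -m 0xE schedules reactors on cores 1-3 (0xE = 0b1110, matching the three "Reactor started" notices), and -e 0xFFFF enables every tracepoint group, hence the spdk_trace hints in the startup banner. The rpc_cmd calls map onto SPDK's stock RPC client; replayed by hand against the same socket they would look roughly like this (driving it through scripts/rpc.py is an assumption here, since rpc_cmd's definition is not part of this excerpt):

    rpc="/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0     # 64 MiB ramdisk, 512 B blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

The listener notice on the next line confirms the last call took effect.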
00:37:31.354 [2024-12-05 14:08:30.891396] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:37:31.354 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:31.354 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:37:31.354 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:37:31.354 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:37:31.354 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:37:31.354 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:31.354 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:31.354 { 00:37:31.354 "params": { 00:37:31.354 "name": "Nvme$subsystem", 00:37:31.354 "trtype": "$TEST_TRANSPORT", 00:37:31.354 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:31.354 "adrfam": "ipv4", 00:37:31.354 "trsvcid": "$NVMF_PORT", 00:37:31.354 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:31.354 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:31.354 "hdgst": ${hdgst:-false}, 00:37:31.354 "ddgst": ${ddgst:-false} 00:37:31.354 }, 00:37:31.354 "method": "bdev_nvme_attach_controller" 00:37:31.354 } 00:37:31.354 EOF 00:37:31.354 )") 00:37:31.354 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:37:31.354 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:37:31.354 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:37:31.354 14:08:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:31.354 "params": { 00:37:31.354 "name": "Nvme1", 00:37:31.354 "trtype": "rdma", 00:37:31.354 "traddr": "192.168.100.8", 00:37:31.354 "adrfam": "ipv4", 00:37:31.354 "trsvcid": "4420", 00:37:31.354 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:31.354 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:31.354 "hdgst": false, 00:37:31.354 "ddgst": false 00:37:31.354 }, 00:37:31.354 "method": "bdev_nvme_attach_controller" 00:37:31.354 }' 00:37:31.354 [2024-12-05 14:08:30.932186] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 00:37:31.354 [2024-12-05 14:08:30.932225] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1953901 ] 00:37:31.354 [2024-12-05 14:08:31.007551] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:31.354 [2024-12-05 14:08:31.028830] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:31.354 Running I/O for 1 seconds... 
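The JSON that bdevperf read from /dev/fd/62 is the printf output above: a single bdev_nvme_attach_controller entry pointing Nvme1 at the first target IP. Run standalone, the first leg is roughly the following; the params block is verbatim from the trace, while the outer subsystems/config wrapper is assumed from SPDK's JSON-config format (gen_nvmf_target_json assembles it through jq, which the trace only shows in passing):

    config='{"subsystems":[{"subsystem":"bdev","config":[{
      "method": "bdev_nvme_attach_controller",
      "params": {"name":"Nvme1","trtype":"rdma","traddr":"192.168.100.8",
                 "adrfam":"ipv4","trsvcid":"4420",
                 "subnqn":"nqn.2016-06.io.spdk:cnode1",
                 "hostnqn":"nqn.2016-06.io.spdk:host1",
                 "hdgst":false,"ddgst":false}}]}]}'
    bdevperf=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf
    $bdevperf --json <(printf '%s\n' "$config") -q 128 -o 4096 -w verify -t 1

-q 128 -o 4096 -w verify -t 1 is a 128-deep, 4 KiB read/write-verify workload for one second; its results table follows.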
00:37:32.732 19298.00 IOPS, 75.38 MiB/s 00:37:32.733 Latency(us) 00:37:32.733 [2024-12-05T13:08:32.586Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:32.733 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:37:32.733 Verification LBA range: start 0x0 length 0x4000 00:37:32.733 Nvme1n1 : 1.01 19310.65 75.43 0.00 0.00 6594.25 2463.67 11505.21 00:37:32.733 [2024-12-05T13:08:32.586Z] =================================================================================================================== 00:37:32.733 [2024-12-05T13:08:32.586Z] Total : 19310.65 75.43 0.00 0.00 6594.25 2463.67 11505.21 00:37:32.733 14:08:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=1954163 00:37:32.733 14:08:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:37:32.733 14:08:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:37:32.733 14:08:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:37:32.733 14:08:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:37:32.733 14:08:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:37:32.733 14:08:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:32.733 14:08:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:32.733 { 00:37:32.733 "params": { 00:37:32.733 "name": "Nvme$subsystem", 00:37:32.733 "trtype": "$TEST_TRANSPORT", 00:37:32.733 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:32.733 "adrfam": "ipv4", 00:37:32.733 "trsvcid": "$NVMF_PORT", 00:37:32.733 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:32.733 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:32.733 "hdgst": ${hdgst:-false}, 00:37:32.733 "ddgst": ${ddgst:-false} 00:37:32.733 }, 00:37:32.733 "method": "bdev_nvme_attach_controller" 00:37:32.733 } 00:37:32.733 EOF 00:37:32.733 )") 00:37:32.733 14:08:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:37:32.733 14:08:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:37:32.733 14:08:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:37:32.733 14:08:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:32.733 "params": { 00:37:32.733 "name": "Nvme1", 00:37:32.733 "trtype": "rdma", 00:37:32.733 "traddr": "192.168.100.8", 00:37:32.733 "adrfam": "ipv4", 00:37:32.733 "trsvcid": "4420", 00:37:32.733 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:32.733 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:32.733 "hdgst": false, 00:37:32.733 "ddgst": false 00:37:32.733 }, 00:37:32.733 "method": "bdev_nvme_attach_controller" 00:37:32.733 }' 00:37:32.733 [2024-12-05 14:08:32.427554] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 
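The second bdevperf invocation above is the actual failover exercise: the same config, but -t 15 for a 15 second job plus -f, which as used by this script keeps bdevperf alive through the failure it is about to be handed. A few seconds in, the script yanks the target out from under it (pid and host/bdevperf.sh line numbers as traced below; $nvmfpid is an assumed variable name for the nvmf_tgt pid recorded at startup):

    # host/bdevperf.sh@33-35, in effect:
    kill -9 "$nvmfpid"   # 1953869: nvmf_tgt dies and its RDMA queue pairs vanish
    sleep 3              # let the host detect the dead connection mid-workload

Everything that follows, the per-second IOPS drop (19328 to 17408 to 13056) and the flood of ABORTED - SQ DELETION completions, is the host-side bdev_nvme layer draining its queues after the target disappeared.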
00:37:32.733 [2024-12-05 14:08:32.427598] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1954163 ] 00:37:32.733 [2024-12-05 14:08:32.499689] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:32.733 [2024-12-05 14:08:32.518532] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:32.992 Running I/O for 15 seconds... 00:37:34.863 19202.00 IOPS, 75.01 MiB/s [2024-12-05T13:08:35.653Z] 19328.00 IOPS, 75.50 MiB/s [2024-12-05T13:08:35.653Z] 14:08:35 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 1953869 00:37:35.800 14:08:35 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:37:36.626 17408.00 IOPS, 68.00 MiB/s [2024-12-05T13:08:36.479Z] [2024-12-05 14:08:36.415360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:25120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.626 [2024-12-05 14:08:36.415538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:29465000 sqhd:7210 p:0 m:0 dnr:0 00:37:36.626 [2024-12-05 14:08:36.415573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:25128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.626 [2024-12-05 14:08:36.415580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:29465000 sqhd:7210 p:0 m:0 dnr:0 00:37:36.626 [2024-12-05 14:08:36.415589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:25136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.626 [2024-12-05 14:08:36.415596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:29465000 sqhd:7210 p:0 m:0 dnr:0 00:37:36.626 [2024-12-05 14:08:36.415603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:25144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.626 [2024-12-05 14:08:36.415609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:29465000 sqhd:7210 p:0 m:0 dnr:0 00:37:36.626 [2024-12-05 14:08:36.415616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:25152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.626 [2024-12-05 14:08:36.415622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:29465000 sqhd:7210 p:0 m:0 dnr:0 00:37:36.626 [2024-12-05 14:08:36.415629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:25160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.626 [2024-12-05 14:08:36.415638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:29465000 sqhd:7210 p:0 m:0 dnr:0 00:37:36.626 [2024-12-05 14:08:36.415645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.626 [2024-12-05 14:08:36.415651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:29465000 sqhd:7210 p:0 m:0 dnr:0 00:37:36.626 [2024-12-05 14:08:36.415658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:25176 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.627 [2024-12-05 14:08:36.415663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:29465000 sqhd:7210 p:0 m:0 dnr:0 00:37:36.627 [2024-12-05 14:08:36.415671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:25184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.627 [2024-12-05 14:08:36.415676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:29465000 sqhd:7210 p:0 m:0 dnr:0 00:37:36.627 [2024-12-05 14:08:36.415683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:25192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.627 [2024-12-05 14:08:36.415689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:29465000 sqhd:7210 p:0 m:0 dnr:0 00:37:36.627 [2024-12-05 14:08:36.415696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.627 [2024-12-05 14:08:36.415701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:29465000 sqhd:7210 p:0 m:0 dnr:0 00:37:36.627 [2024-12-05 14:08:36.415708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:25208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.627 [2024-12-05 14:08:36.415714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:29465000 sqhd:7210 p:0 m:0 dnr:0 00:37:36.627 [2024-12-05 14:08:36.415721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:25216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.627 [2024-12-05 14:08:36.415726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:29465000 sqhd:7210 p:0 m:0 dnr:0 00:37:36.627 [2024-12-05 14:08:36.415733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.627 [2024-12-05 14:08:36.415739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:29465000 sqhd:7210 p:0 m:0 dnr:0 00:37:36.627 [2024-12-05 14:08:36.415746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:25232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.627 [2024-12-05 14:08:36.415751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:29465000 sqhd:7210 p:0 m:0 dnr:0 00:37:36.627 [2024-12-05 14:08:36.415758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:25240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.627 [2024-12-05 14:08:36.415764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:29465000 sqhd:7210 p:0 m:0 dnr:0 00:37:36.627 [2024-12-05 14:08:36.415775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:25248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.627 [2024-12-05 14:08:36.415781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:29465000 sqhd:7210 p:0 m:0 dnr:0 00:37:36.627 [2024-12-05 14:08:36.415788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 
lba:25256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.627 [2024-12-05 14:08:36.415794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:29465000 sqhd:7210 p:0 m:0 dnr:0 00:37:36.627 [2024-12-05 14:08:36.415803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:25264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.627 [2024-12-05 14:08:36.415808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:29465000 sqhd:7210 p:0 m:0 dnr:0 00:37:36.627 [2024-12-05 14:08:36.415815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:25272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.627 [2024-12-05 14:08:36.415821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:29465000 sqhd:7210 p:0 m:0 dnr:0 00:37:36.627 [2024-12-05 14:08:36.415828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:25280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.627 [2024-12-05 14:08:36.415833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:29465000 sqhd:7210 p:0 m:0 dnr:0 00:37:36.627 [2024-12-05 14:08:36.415841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:25288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.627 [2024-12-05 14:08:36.415846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:29465000 sqhd:7210 p:0 m:0 dnr:0 00:37:36.627 [2024-12-05 14:08:36.415853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:25296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.627 [2024-12-05 14:08:36.415859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:29465000 sqhd:7210 p:0 m:0 dnr:0 00:37:36.627 [2024-12-05 14:08:36.415866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:25304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.627 [2024-12-05 14:08:36.415871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:29465000 sqhd:7210 p:0 m:0 dnr:0 00:37:36.627 [2024-12-05 14:08:36.415878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.627 [2024-12-05 14:08:36.415884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:29465000 sqhd:7210 p:0 m:0 dnr:0 00:37:36.627 [2024-12-05 14:08:36.415891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:25320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.627 [2024-12-05 14:08:36.415897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:29465000 sqhd:7210 p:0 m:0 dnr:0 00:37:36.627 [2024-12-05 14:08:36.415904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:25328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.627 [2024-12-05 14:08:36.415910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:29465000 sqhd:7210 p:0 m:0 dnr:0 00:37:36.627 [2024-12-05 14:08:36.415917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:115 nsid:1 lba:25336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.627 [2024-12-05 14:08:36.415922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:29465000 sqhd:7210 p:0 m:0 dnr:0 00:37:36.627 [2024-12-05 14:08:36.415929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.627 [2024-12-05 14:08:36.415935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:29465000 sqhd:7210 p:0 m:0 dnr:0 00:37:36.627 [2024-12-05 14:08:36.415942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:25352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.627 [2024-12-05 14:08:36.415947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:29465000 sqhd:7210 p:0 m:0 dnr:0 00:37:36.627 [2024-12-05 14:08:36.415954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:25360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.627 [2024-12-05 14:08:36.415960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:29465000 sqhd:7210 p:0 m:0 dnr:0 00:37:36.627 [2024-12-05 14:08:36.415967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:25368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.627 [2024-12-05 14:08:36.415973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:29465000 sqhd:7210 p:0 m:0 dnr:0 00:37:36.627 [2024-12-05 14:08:36.415981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:25376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.627 [2024-12-05 14:08:36.415986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:29465000 sqhd:7210 p:0 m:0 dnr:0 00:37:36.627 [2024-12-05 14:08:36.415995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:25384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.627 [2024-12-05 14:08:36.416001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:29465000 sqhd:7210 p:0 m:0 dnr:0 00:37:36.627 [2024-12-05 14:08:36.416008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:25392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.627 [2024-12-05 14:08:36.416013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:29465000 sqhd:7210 p:0 m:0 dnr:0 00:37:36.627 [2024-12-05 14:08:36.416020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:25400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.627 [2024-12-05 14:08:36.416026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:29465000 sqhd:7210 p:0 m:0 dnr:0 00:37:36.627 [2024-12-05 14:08:36.416033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:25408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.627 [2024-12-05 14:08:36.416038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:29465000 sqhd:7210 p:0 m:0 dnr:0 00:37:36.627 [2024-12-05 14:08:36.416045] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.627 [2024-12-05 14:08:36.416050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:29465000 sqhd:7210 p:0 m:0 dnr:0 00:37:36.627 [2024-12-05 14:08:36.416057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:25424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.627 [2024-12-05 14:08:36.416063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:29465000 sqhd:7210 p:0 m:0 dnr:0 00:37:36.627 [2024-12-05 14:08:36.416070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:25432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.627 [2024-12-05 14:08:36.416075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:29465000 sqhd:7210 p:0 m:0 dnr:0 00:37:36.627 [2024-12-05 14:08:36.416082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:25440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.627 [2024-12-05 14:08:36.416087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:29465000 sqhd:7210 p:0 m:0 dnr:0 00:37:36.627 [2024-12-05 14:08:36.416094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:25448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.627 [2024-12-05 14:08:36.416100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:29465000 sqhd:7210 p:0 m:0 dnr:0 00:37:36.627 [2024-12-05 14:08:36.416106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:25456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.627 [2024-12-05 14:08:36.416113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:29465000 sqhd:7210 p:0 m:0 dnr:0 00:37:36.627 [2024-12-05 14:08:36.416120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:25464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.627 [2024-12-05 14:08:36.416125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:29465000 sqhd:7210 p:0 m:0 dnr:0 00:37:36.627 [2024-12-05 14:08:36.416132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:25472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.627 [2024-12-05 14:08:36.416137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:29465000 sqhd:7210 p:0 m:0 dnr:0 00:37:36.627 [2024-12-05 14:08:36.416144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:25480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.627 [2024-12-05 14:08:36.416150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:29465000 sqhd:7210 p:0 m:0 dnr:0 00:37:36.627 [2024-12-05 14:08:36.416156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:25488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.628 [2024-12-05 14:08:36.416162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:29465000 sqhd:7210 p:0 m:0 dnr:0 00:37:36.628 [2024-12-05 
14:08:36.416169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:25496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.628 [2024-12-05 14:08:36.416174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:29465000 sqhd:7210 p:0 m:0 dnr:0 00:37:36.628 [2024-12-05 14:08:36.416182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:25504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.628 [2024-12-05 14:08:36.416188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:29465000 sqhd:7210 p:0 m:0 dnr:0 00:37:36.628 [2024-12-05 14:08:36.416196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:25512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.628 [2024-12-05 14:08:36.416202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:29465000 sqhd:7210 p:0 m:0 dnr:0 00:37:36.628 [2024-12-05 14:08:36.416209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:25520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.628 [2024-12-05 14:08:36.416214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:29465000 sqhd:7210 p:0 m:0 dnr:0 00:37:36.628 [2024-12-05 14:08:36.416221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:25528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.628 [2024-12-05 14:08:36.416226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:29465000 sqhd:7210 p:0 m:0 dnr:0 00:37:36.628 [2024-12-05 14:08:36.416233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.628 [2024-12-05 14:08:36.416239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:29465000 sqhd:7210 p:0 m:0 dnr:0 00:37:36.628 [2024-12-05 14:08:36.416246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:25544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.628 [2024-12-05 14:08:36.416251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:29465000 sqhd:7210 p:0 m:0 dnr:0 00:37:36.628 [2024-12-05 14:08:36.416258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:25552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.628 [2024-12-05 14:08:36.416264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:29465000 sqhd:7210 p:0 m:0 dnr:0 00:37:36.628 [2024-12-05 14:08:36.416272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:25560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.628 [2024-12-05 14:08:36.416277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:29465000 sqhd:7210 p:0 m:0 dnr:0 00:37:36.628 [2024-12-05 14:08:36.416284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.628 [2024-12-05 14:08:36.416290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:29465000 sqhd:7210 p:0 m:0 dnr:0 
00:37:36.628 [2024-12-05 14:08:36.416297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:25576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.628 [2024-12-05 14:08:36.416303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:29465000 sqhd:7210 p:0 m:0 dnr:0 00:37:36.628 [2024-12-05 14:08:36.416309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:25584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.628 [2024-12-05 14:08:36.416315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:29465000 sqhd:7210 p:0 m:0 dnr:0 00:37:36.628 [2024-12-05 14:08:36.416322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:25592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.628 [2024-12-05 14:08:36.416328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:29465000 sqhd:7210 p:0 m:0 dnr:0 00:37:36.628 [2024-12-05 14:08:36.416335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24576 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004300000 len:0x1000 key:0x180f00 00:37:36.628 [2024-12-05 14:08:36.416341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:29465000 sqhd:7210 p:0 m:0 dnr:0 00:37:36.628 [2024-12-05 14:08:36.416348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:24584 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004302000 len:0x1000 key:0x180f00 00:37:36.628 [2024-12-05 14:08:36.416354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:29465000 sqhd:7210 p:0 m:0 dnr:0 00:37:36.628 [2024-12-05 14:08:36.416361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:24592 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004304000 len:0x1000 key:0x180f00 00:37:36.628 [2024-12-05 14:08:36.416367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:29465000 sqhd:7210 p:0 m:0 dnr:0 00:37:36.628 [2024-12-05 14:08:36.416374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:24600 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004306000 len:0x1000 key:0x180f00 00:37:36.628 [2024-12-05 14:08:36.416384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:29465000 sqhd:7210 p:0 m:0 dnr:0 00:37:36.628 [2024-12-05 14:08:36.416393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:24608 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004308000 len:0x1000 key:0x180f00 00:37:36.628 [2024-12-05 14:08:36.416398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:29465000 sqhd:7210 p:0 m:0 dnr:0 00:37:36.628 [2024-12-05 14:08:36.416406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:24616 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000430a000 len:0x1000 key:0x180f00 00:37:36.628 [2024-12-05 14:08:36.416411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:29465000 sqhd:7210 p:0 m:0 dnr:0 00:37:36.628 [2024-12-05 14:08:36.416419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:24624 len:8 SGL KEYED DATA 
BLOCK ADDRESS 0x20000430c000 len:0x1000 key:0x180f00 00:37:36.628 [2024-12-05 14:08:36.416424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:29465000 sqhd:7210 p:0 m:0 dnr:0 00:37:36.628 [2024-12-05 14:08:36.416432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:24632 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000430e000 len:0x1000 key:0x180f00 00:37:36.628 [2024-12-05 14:08:36.416438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:29465000 sqhd:7210 p:0 m:0 dnr:0 00:37:36.628 [2024-12-05 14:08:36.416446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:24640 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004310000 len:0x1000 key:0x180f00 00:37:36.628 [2024-12-05 14:08:36.416451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:29465000 sqhd:7210 p:0 m:0 dnr:0 00:37:36.628 [2024-12-05 14:08:36.416459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:24648 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004312000 len:0x1000 key:0x180f00 00:37:36.628 [2024-12-05 14:08:36.416464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:29465000 sqhd:7210 p:0 m:0 dnr:0 00:37:36.628 [2024-12-05 14:08:36.416472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:24656 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004314000 len:0x1000 key:0x180f00 00:37:36.628 [2024-12-05 14:08:36.416478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:29465000 sqhd:7210 p:0 m:0 dnr:0 00:37:36.628 [2024-12-05 14:08:36.416486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:24664 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004316000 len:0x1000 key:0x180f00 00:37:36.628 [2024-12-05 14:08:36.416491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:29465000 sqhd:7210 p:0 m:0 dnr:0 00:37:36.628 [2024-12-05 14:08:36.416498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24672 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004318000 len:0x1000 key:0x180f00 00:37:36.628 [2024-12-05 14:08:36.416504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:29465000 sqhd:7210 p:0 m:0 dnr:0 00:37:36.628 [2024-12-05 14:08:36.416511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:24680 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000431a000 len:0x1000 key:0x180f00 00:37:36.628 [2024-12-05 14:08:36.416517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:29465000 sqhd:7210 p:0 m:0 dnr:0 00:37:36.628 [2024-12-05 14:08:36.416524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:24688 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000431c000 len:0x1000 key:0x180f00 00:37:36.628 [2024-12-05 14:08:36.416530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:29465000 sqhd:7210 p:0 m:0 dnr:0 00:37:36.628 [2024-12-05 14:08:36.416537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:24696 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000431e000 len:0x1000 key:0x180f00 00:37:36.628 
[2024-12-05 14:08:36.416542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:29465000 sqhd:7210 p:0 m:0 dnr:0 00:37:36.628 [2024-12-05 14:08:36.416550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:24704 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004320000 len:0x1000 key:0x180f00 00:37:36.628 [2024-12-05 14:08:36.416555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:29465000 sqhd:7210 p:0 m:0 dnr:0 00:37:36.628 [2024-12-05 14:08:36.416562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:24712 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004322000 len:0x1000 key:0x180f00 00:37:36.628 [2024-12-05 14:08:36.416568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:29465000 sqhd:7210 p:0 m:0 dnr:0 00:37:36.628 [2024-12-05 14:08:36.416576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:24720 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004324000 len:0x1000 key:0x180f00 00:37:36.628 [2024-12-05 14:08:36.416582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:29465000 sqhd:7210 p:0 m:0 dnr:0 00:37:36.628 [2024-12-05 14:08:36.416589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24728 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004326000 len:0x1000 key:0x180f00 00:37:36.628 [2024-12-05 14:08:36.416594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:29465000 sqhd:7210 p:0 m:0 dnr:0 00:37:36.628 [2024-12-05 14:08:36.416602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:24736 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004328000 len:0x1000 key:0x180f00 00:37:36.628 [2024-12-05 14:08:36.416608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:29465000 sqhd:7210 p:0 m:0 dnr:0 00:37:36.628 [2024-12-05 14:08:36.416615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:24744 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000432a000 len:0x1000 key:0x180f00 00:37:36.628 [2024-12-05 14:08:36.416621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:29465000 sqhd:7210 p:0 m:0 dnr:0 00:37:36.628 [2024-12-05 14:08:36.416629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:24752 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000432c000 len:0x1000 key:0x180f00 00:37:36.628 [2024-12-05 14:08:36.416634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:29465000 sqhd:7210 p:0 m:0 dnr:0 00:37:36.628 [2024-12-05 14:08:36.416642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:24760 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000432e000 len:0x1000 key:0x180f00 00:37:36.628 [2024-12-05 14:08:36.416647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:29465000 sqhd:7210 p:0 m:0 dnr:0 00:37:36.629 [2024-12-05 14:08:36.416654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24768 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004330000 len:0x1000 key:0x180f00 00:37:36.629 [2024-12-05 14:08:36.416660] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:29465000 sqhd:7210 p:0 m:0 dnr:0 00:37:36.629 [2024-12-05 14:08:36.416667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:24776 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004332000 len:0x1000 key:0x180f00 00:37:36.629 [2024-12-05 14:08:36.416672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:29465000 sqhd:7210 p:0 m:0 dnr:0 00:37:36.629 [2024-12-05 14:08:36.416680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:24784 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004334000 len:0x1000 key:0x180f00 00:37:36.629 [2024-12-05 14:08:36.416685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:29465000 sqhd:7210 p:0 m:0 dnr:0 00:37:36.629 [2024-12-05 14:08:36.416693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:24792 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004336000 len:0x1000 key:0x180f00 00:37:36.629 [2024-12-05 14:08:36.416698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:29465000 sqhd:7210 p:0 m:0 dnr:0 00:37:36.629 [2024-12-05 14:08:36.416706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:24800 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004338000 len:0x1000 key:0x180f00 00:37:36.629 [2024-12-05 14:08:36.416711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:29465000 sqhd:7210 p:0 m:0 dnr:0 00:37:36.629 [2024-12-05 14:08:36.416718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:24808 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000433a000 len:0x1000 key:0x180f00 00:37:36.629 [2024-12-05 14:08:36.416725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:29465000 sqhd:7210 p:0 m:0 dnr:0 00:37:36.629 [2024-12-05 14:08:36.416732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:24816 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000433c000 len:0x1000 key:0x180f00 00:37:36.629 [2024-12-05 14:08:36.416738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:29465000 sqhd:7210 p:0 m:0 dnr:0 00:37:36.629 [2024-12-05 14:08:36.416745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:24824 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000433e000 len:0x1000 key:0x180f00 00:37:36.629 [2024-12-05 14:08:36.416750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:29465000 sqhd:7210 p:0 m:0 dnr:0 00:37:36.629 [2024-12-05 14:08:36.416758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:24832 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004340000 len:0x1000 key:0x180f00 00:37:36.629 [2024-12-05 14:08:36.416763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:29465000 sqhd:7210 p:0 m:0 dnr:0 00:37:36.629 [2024-12-05 14:08:36.416770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:24840 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004342000 len:0x1000 key:0x180f00 00:37:36.629 [2024-12-05 14:08:36.416776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:29465000 sqhd:7210 p:0 m:0 dnr:0 00:37:36.629 [2024-12-05 14:08:36.416783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:24848 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004344000 len:0x1000 key:0x180f00 00:37:36.629 [2024-12-05 14:08:36.416789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:29465000 sqhd:7210 p:0 m:0 dnr:0 00:37:36.629 [2024-12-05 14:08:36.416796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:24856 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004346000 len:0x1000 key:0x180f00 00:37:36.629 [2024-12-05 14:08:36.416801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:29465000 sqhd:7210 p:0 m:0 dnr:0 00:37:36.629 [2024-12-05 14:08:36.416810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:24864 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004348000 len:0x1000 key:0x180f00 00:37:36.629 [2024-12-05 14:08:36.416815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:29465000 sqhd:7210 p:0 m:0 dnr:0 00:37:36.629 [2024-12-05 14:08:36.416822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24872 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000434a000 len:0x1000 key:0x180f00 00:37:36.629 [2024-12-05 14:08:36.416828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:29465000 sqhd:7210 p:0 m:0 dnr:0 00:37:36.629 [2024-12-05 14:08:36.416834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:24880 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000434c000 len:0x1000 key:0x180f00 00:37:36.629 [2024-12-05 14:08:36.416840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:29465000 sqhd:7210 p:0 m:0 dnr:0 00:37:36.629 [2024-12-05 14:08:36.416848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:24888 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000434e000 len:0x1000 key:0x180f00 00:37:36.629 [2024-12-05 14:08:36.416853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:29465000 sqhd:7210 p:0 m:0 dnr:0 00:37:36.629 [2024-12-05 14:08:36.416860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:24896 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004350000 len:0x1000 key:0x180f00 00:37:36.629 [2024-12-05 14:08:36.416866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:29465000 sqhd:7210 p:0 m:0 dnr:0 00:37:36.629 [2024-12-05 14:08:36.416874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:24904 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004352000 len:0x1000 key:0x180f00 00:37:36.629 [2024-12-05 14:08:36.416879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:29465000 sqhd:7210 p:0 m:0 dnr:0 00:37:36.629 [2024-12-05 14:08:36.416888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:24912 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004354000 len:0x1000 key:0x180f00 00:37:36.629 [2024-12-05 14:08:36.416893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:29465000 sqhd:7210 p:0 m:0 dnr:0 
00:37:36.629 [2024-12-05 14:08:36.416901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:24920 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004356000 len:0x1000 key:0x180f00 00:37:36.629 [2024-12-05 14:08:36.416906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:29465000 sqhd:7210 p:0 m:0 dnr:0 00:37:36.629 [2024-12-05 14:08:36.416913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:24928 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004358000 len:0x1000 key:0x180f00 00:37:36.629 [2024-12-05 14:08:36.416919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:29465000 sqhd:7210 p:0 m:0 dnr:0 00:37:36.629 [2024-12-05 14:08:36.416926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24936 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000435a000 len:0x1000 key:0x180f00 00:37:36.629 [2024-12-05 14:08:36.416931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:29465000 sqhd:7210 p:0 m:0 dnr:0 00:37:36.629 [2024-12-05 14:08:36.416938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:24944 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000435c000 len:0x1000 key:0x180f00 00:37:36.629 [2024-12-05 14:08:36.416944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:29465000 sqhd:7210 p:0 m:0 dnr:0 00:37:36.629 [2024-12-05 14:08:36.416951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:24952 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000435e000 len:0x1000 key:0x180f00 00:37:36.629 [2024-12-05 14:08:36.416957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:29465000 sqhd:7210 p:0 m:0 dnr:0 00:37:36.629 [2024-12-05 14:08:36.416964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:24960 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004360000 len:0x1000 key:0x180f00 00:37:36.629 [2024-12-05 14:08:36.416969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:29465000 sqhd:7210 p:0 m:0 dnr:0 00:37:36.629 [2024-12-05 14:08:36.416977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:24968 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004362000 len:0x1000 key:0x180f00 00:37:36.629 [2024-12-05 14:08:36.416982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:29465000 sqhd:7210 p:0 m:0 dnr:0 00:37:36.629 [2024-12-05 14:08:36.416989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:24976 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004364000 len:0x1000 key:0x180f00 00:37:36.629 [2024-12-05 14:08:36.416995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:29465000 sqhd:7210 p:0 m:0 dnr:0 00:37:36.629 [2024-12-05 14:08:36.417002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:24984 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004366000 len:0x1000 key:0x180f00 00:37:36.629 [2024-12-05 14:08:36.417008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:29465000 sqhd:7210 p:0 m:0 dnr:0 00:37:36.629 [2024-12-05 14:08:36.417019] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:24992 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004368000 len:0x1000 key:0x180f00 00:37:36.629 [2024-12-05 14:08:36.417026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:29465000 sqhd:7210 p:0 m:0 dnr:0 00:37:36.629 [2024-12-05 14:08:36.417033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:25000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000436a000 len:0x1000 key:0x180f00 00:37:36.629 [2024-12-05 14:08:36.417039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:29465000 sqhd:7210 p:0 m:0 dnr:0 00:37:36.629 [2024-12-05 14:08:36.417046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000436c000 len:0x1000 key:0x180f00 00:37:36.629 [2024-12-05 14:08:36.417051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:29465000 sqhd:7210 p:0 m:0 dnr:0 00:37:36.629 [2024-12-05 14:08:36.417059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:25016 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000436e000 len:0x1000 key:0x180f00 00:37:36.629 [2024-12-05 14:08:36.417065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:29465000 sqhd:7210 p:0 m:0 dnr:0 00:37:36.629 [2024-12-05 14:08:36.417072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:25024 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004370000 len:0x1000 key:0x180f00 00:37:36.629 [2024-12-05 14:08:36.417078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:29465000 sqhd:7210 p:0 m:0 dnr:0 00:37:36.629 [2024-12-05 14:08:36.417085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:25032 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004372000 len:0x1000 key:0x180f00 00:37:36.629 [2024-12-05 14:08:36.417090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:29465000 sqhd:7210 p:0 m:0 dnr:0 00:37:36.629 [2024-12-05 14:08:36.417097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:25040 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004374000 len:0x1000 key:0x180f00 00:37:36.629 [2024-12-05 14:08:36.417103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:29465000 sqhd:7210 p:0 m:0 dnr:0 00:37:36.629 [2024-12-05 14:08:36.417110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:25048 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004376000 len:0x1000 key:0x180f00 00:37:36.629 [2024-12-05 14:08:36.417115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:29465000 sqhd:7210 p:0 m:0 dnr:0 00:37:36.629 [2024-12-05 14:08:36.417123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:25056 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004378000 len:0x1000 key:0x180f00 00:37:36.630 [2024-12-05 14:08:36.417128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:29465000 sqhd:7210 p:0 m:0 dnr:0 00:37:36.630 [2024-12-05 14:08:36.417135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 
lba:25064 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000437a000 len:0x1000 key:0x180f00 00:37:36.630 [2024-12-05 14:08:36.417141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:29465000 sqhd:7210 p:0 m:0 dnr:0 00:37:36.630 [2024-12-05 14:08:36.417148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:25072 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000437c000 len:0x1000 key:0x180f00 00:37:36.630 [2024-12-05 14:08:36.417153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:29465000 sqhd:7210 p:0 m:0 dnr:0 00:37:36.630 [2024-12-05 14:08:36.417161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:25080 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000437e000 len:0x1000 key:0x180f00 00:37:36.630 [2024-12-05 14:08:36.417169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:29465000 sqhd:7210 p:0 m:0 dnr:0 00:37:36.630 [2024-12-05 14:08:36.417177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:25088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004380000 len:0x1000 key:0x180f00 00:37:36.630 [2024-12-05 14:08:36.417183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:29465000 sqhd:7210 p:0 m:0 dnr:0 00:37:36.630 [2024-12-05 14:08:36.417190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:25096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004382000 len:0x1000 key:0x180f00 00:37:36.630 [2024-12-05 14:08:36.417196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:29465000 sqhd:7210 p:0 m:0 dnr:0 00:37:36.630 [2024-12-05 14:08:36.417203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:25104 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004384000 len:0x1000 key:0x180f00 00:37:36.630 [2024-12-05 14:08:36.417208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:29465000 sqhd:7210 p:0 m:0 dnr:0 00:37:36.630 [2024-12-05 14:08:36.428779] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:36.630 [2024-12-05 14:08:36.428792] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:36.630 [2024-12-05 14:08:36.428800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25112 len:8 PRP1 0x0 PRP2 0x0 00:37:36.630 [2024-12-05 14:08:36.428807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:36.630 [2024-12-05 14:08:36.428866] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:37:36.630 [2024-12-05 14:08:36.428874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32767 cdw0:1d65a20 sqhd:1c40 p:0 m:0 dnr:0 00:37:36.630 [2024-12-05 14:08:36.428882] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:37:36.630 [2024-12-05 14:08:36.428888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32767 cdw0:1d65a20 sqhd:1c40 p:0 m:0 dnr:0 00:37:36.630 [2024-12-05 14:08:36.428895] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:37:36.630 [2024-12-05 14:08:36.428900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32767 cdw0:1d65a20 sqhd:1c40 p:0 m:0 dnr:0 00:37:36.630 [2024-12-05 14:08:36.428906] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:37:36.630 [2024-12-05 14:08:36.428913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32767 cdw0:1d65a20 sqhd:1c40 p:0 m:0 dnr:0 00:37:36.630 [2024-12-05 14:08:36.444832] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 0 00:37:36.630 [2024-12-05 14:08:36.444849] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:36.630 [2024-12-05 14:08:36.444857] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Unable to perform failover, already in progress. 00:37:36.630 [2024-12-05 14:08:36.447454] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:36.630 [2024-12-05 14:08:36.450032] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:37:36.630 [2024-12-05 14:08:36.450048] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:37:36.630 [2024-12-05 14:08:36.450057] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000170ed040 00:37:37.823 13056.00 IOPS, 51.00 MiB/s [2024-12-05T13:08:37.676Z] [2024-12-05 14:08:37.453914] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 0 00:37:37.823 [2024-12-05 14:08:37.453967] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:37.823 [2024-12-05 14:08:37.454575] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:37.823 [2024-12-05 14:08:37.454584] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:37.823 [2024-12-05 14:08:37.454592] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] already in failed state 00:37:37.823 [2024-12-05 14:08:37.454601] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
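The passes above are one full failover cycle as bdev_nvme sees it: the completion queue reports transport error -6 once the target side goes away, the controller is failed and disconnected, and the follow-up RDMA connect is rejected (RDMA_CM_EVENT_REJECTED, connect error -74) because nothing is listening on the target yet, so the reset itself fails and the cycle repeats. How long the host keeps cycling is decided when the controller is attached. A minimal sketch, assuming SPDK's rpc.py and its long-form bdev_nvme_attach_controller reconnect options (option names vary by SPDK release, so treat them as illustrative rather than exact):

# attach the same subsystem with an explicit reconnect policy (sketch)
# --reconnect-delay-sec: pause between the 'resetting controller' attempts seen above
# --ctrlr-loss-timeout-sec: give up and drop the controller after this many seconds
./scripts/rpc.py bdev_nvme_attach_controller \
        -b Nvme1 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 \
        --reconnect-delay-sec 1 --ctrlr-loss-timeout-sec 30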
00:37:37.823 [2024-12-05 14:08:37.464383] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:37.823 [2024-12-05 14:08:37.467942] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:37:37.823 [2024-12-05 14:08:37.467967] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:37:37.823 [2024-12-05 14:08:37.467976] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000170ed040 00:37:38.649 10444.80 IOPS, 40.80 MiB/s [2024-12-05T13:08:38.502Z] /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 1953869 Killed "${NVMF_APP[@]}" "$@" 00:37:38.649 14:08:38 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:37:38.649 14:08:38 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:37:38.649 14:08:38 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:38.649 14:08:38 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:38.649 14:08:38 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:38.649 14:08:38 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=1955216 00:37:38.649 14:08:38 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 1955216 00:37:38.649 14:08:38 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:37:38.649 14:08:38 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 1955216 ']' 00:37:38.649 14:08:38 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:38.649 14:08:38 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:38.649 14:08:38 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:38.649 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:38.649 14:08:38 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:38.649 14:08:38 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:38.649 [2024-12-05 14:08:38.447133] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 00:37:38.649 [2024-12-05 14:08:38.447175] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:38.649 [2024-12-05 14:08:38.471731] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 0 00:37:38.649 [2024-12-05 14:08:38.471756] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 
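At this point tgt_init has killed the old target (the "Killed" line from bdevperf.sh) and launched a fresh nvmf_tgt, and waitforlisten blocks until the new process answers on /var/tmp/spdk.sock while the host keeps logging failed reconnects in the background. A sketch of what a waitforlisten-style helper does, assuming rpc.py and the default socket path (the real helper lives in autotest_common.sh and is more thorough):

# poll until the app is alive and its RPC socket responds (sketch)
wait_for_rpc() {
        local pid=$1 retries=100
        while (( retries-- > 0 )); do
                # bail out early if the target died during startup
                kill -0 "$pid" 2>/dev/null || return 1
                # rpc_get_methods only succeeds once the socket is being served
                ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && return 0
                sleep 0.1
        done
        return 1
}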
00:37:38.649 [2024-12-05 14:08:38.471920] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:38.649 [2024-12-05 14:08:38.471931] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:38.649 [2024-12-05 14:08:38.471939] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] already in failed state 00:37:38.649 [2024-12-05 14:08:38.471947] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:38.649 [2024-12-05 14:08:38.476771] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:38.649 [2024-12-05 14:08:38.479213] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:37:38.649 [2024-12-05 14:08:38.479231] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:37:38.649 [2024-12-05 14:08:38.479238] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000170ed040 00:37:38.909 [2024-12-05 14:08:38.523301] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:37:38.909 [2024-12-05 14:08:38.544320] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:38.909 [2024-12-05 14:08:38.544357] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:38.909 [2024-12-05 14:08:38.544366] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:38.909 [2024-12-05 14:08:38.544372] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:38.909 [2024-12-05 14:08:38.544415] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:38.909 [2024-12-05 14:08:38.545515] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:37:38.909 [2024-12-05 14:08:38.545643] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:38.909 [2024-12-05 14:08:38.545644] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:37:38.909 14:08:38 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:38.909 14:08:38 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:37:38.909 14:08:38 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:38.909 14:08:38 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:38.909 14:08:38 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:38.909 14:08:38 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:38.909 14:08:38 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:37:38.909 14:08:38 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:38.909 14:08:38 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:38.909 [2024-12-05 14:08:38.696273] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x7996c0/0x79dbb0) succeed. 
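The app_setup_trace notices above are actionable: the target was started with tracepoint group mask 0xFFFF and shm id 0, so a snapshot can be taken exactly as it suggests. For instance (paths as printed in the notices; the spdk_trace binary location under build/bin is an assumption about this tree's layout):

# decode a live snapshot of the nvmf target's tracepoints (shm id 0)
build/bin/spdk_trace -s nvmf -i 0 > nvmf_trace.txt
# or keep the raw shm file for offline analysis, as the notice recommends
cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0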
00:37:38.909 8704.00 IOPS, 34.00 MiB/s [2024-12-05T13:08:38.762Z] [2024-12-05 14:08:38.704621] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x79acb0/0x7df250) succeed. 00:37:39.168 14:08:38 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:39.168 14:08:38 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:37:39.168 14:08:38 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:39.168 14:08:38 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:39.168 Malloc0 00:37:39.168 14:08:38 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:39.168 14:08:38 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:37:39.168 14:08:38 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:39.168 14:08:38 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:39.168 14:08:38 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:39.168 14:08:38 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:37:39.168 14:08:38 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:39.168 14:08:38 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:39.168 14:08:38 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:39.168 14:08:38 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:37:39.168 14:08:38 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:39.168 14:08:38 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:39.168 [2024-12-05 14:08:38.843187] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:37:39.168 14:08:38 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:39.168 14:08:38 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 1954163 00:37:39.787 [2024-12-05 14:08:39.483114] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 0 00:37:39.787 [2024-12-05 14:08:39.483141] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:39.787 [2024-12-05 14:08:39.483304] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:39.787 [2024-12-05 14:08:39.483312] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:39.787 [2024-12-05 14:08:39.483319] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] already in failed state 00:37:39.787 [2024-12-05 14:08:39.483329] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
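Interleaved with the host's reset failures, the target side is rebuilt step by step: the RDMA transport, a 64 MiB malloc bdev with 512-byte blocks, the cnode1 subsystem, its namespace, and finally the listener on 192.168.100.8:4420. Collected into one plain rpc.py sequence (a sketch; the test issues the same arguments through its rpc_cmd wrapper):

rpc_py=./scripts/rpc.py
$rpc_py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
$rpc_py bdev_malloc_create 64 512 -b Malloc0
$rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420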
00:37:39.787 [2024-12-05 14:08:39.491573] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:39.787 [2024-12-05 14:08:39.533226] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 00:37:41.011 7920.14 IOPS, 30.94 MiB/s [2024-12-05T13:08:41.802Z] 9331.88 IOPS, 36.45 MiB/s [2024-12-05T13:08:42.739Z] 10432.33 IOPS, 40.75 MiB/s [2024-12-05T13:08:44.118Z] 11311.50 IOPS, 44.19 MiB/s [2024-12-05T13:08:45.056Z] 12033.73 IOPS, 47.01 MiB/s [2024-12-05T13:08:45.990Z] 12634.92 IOPS, 49.36 MiB/s [2024-12-05T13:08:46.929Z] 13145.00 IOPS, 51.35 MiB/s [2024-12-05T13:08:47.867Z] 13580.93 IOPS, 53.05 MiB/s [2024-12-05T13:08:47.867Z] 13957.87 IOPS, 54.52 MiB/s 00:37:48.014 Latency(us) 00:37:48.014 [2024-12-05T13:08:47.867Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:48.014 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:37:48.014 Verification LBA range: start 0x0 length 0x4000 00:37:48.014 Nvme1n1 : 15.00 13960.62 54.53 11205.09 0.00 5067.17 335.27 1056343.23 00:37:48.014 [2024-12-05T13:08:47.867Z] =================================================================================================================== 00:37:48.014 [2024-12-05T13:08:47.867Z] Total : 13960.62 54.53 11205.09 0.00 5067.17 335.27 1056343.23 00:37:48.274 14:08:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:37:48.274 14:08:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:48.274 14:08:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:48.274 14:08:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:48.274 14:08:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:48.274 14:08:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:37:48.274 14:08:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:37:48.274 14:08:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:48.274 14:08:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:37:48.274 14:08:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:37:48.274 14:08:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:37:48.274 14:08:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:37:48.274 14:08:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:48.274 14:08:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:37:48.274 rmmod nvme_rdma 00:37:48.274 rmmod nvme_fabrics 00:37:48.274 14:08:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:48.274 14:08:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:37:48.274 14:08:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:37:48.274 14:08:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 1955216 ']' 00:37:48.274 14:08:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 1955216 00:37:48.274 14:08:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 1955216 ']' 00:37:48.274 14:08:47 
nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 1955216 00:37:48.274 14:08:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname 00:37:48.274 14:08:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:48.274 14:08:47 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1955216 00:37:48.274 14:08:48 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:37:48.274 14:08:48 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:37:48.274 14:08:48 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1955216' 00:37:48.274 killing process with pid 1955216 00:37:48.274 14:08:48 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@973 -- # kill 1955216 00:37:48.274 14:08:48 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 1955216 00:37:48.534 14:08:48 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:48.534 14:08:48 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:37:48.534 00:37:48.534 real 0m23.980s 00:37:48.534 user 1m1.698s 00:37:48.534 sys 0m5.606s 00:37:48.534 14:08:48 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:48.534 14:08:48 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:48.534 ************************************ 00:37:48.534 END TEST nvmf_bdevperf 00:37:48.534 ************************************ 00:37:48.534 14:08:48 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=rdma 00:37:48.534 14:08:48 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:37:48.534 14:08:48 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:48.534 14:08:48 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:37:48.534 ************************************ 00:37:48.534 START TEST nvmf_target_disconnect 00:37:48.534 ************************************ 00:37:48.534 14:08:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=rdma 00:37:48.794 * Looking for test storage... 
00:37:48.794 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:37:48.794 14:08:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:37:48.794 14:08:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lcov --version 00:37:48.794 14:08:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:37:48.794 14:08:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:37:48.794 14:08:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:48.794 14:08:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:48.794 14:08:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:48.794 14:08:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:37:48.794 14:08:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:37:48.794 14:08:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:37:48.794 14:08:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:37:48.794 14:08:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:37:48.794 14:08:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:37:48.794 14:08:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:37:48.794 14:08:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:48.794 14:08:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:37:48.794 14:08:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:37:48.794 14:08:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:48.794 14:08:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:48.794 14:08:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:37:48.794 14:08:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:37:48.794 14:08:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:48.794 14:08:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:37:48.794 14:08:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:37:48.794 14:08:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:37:48.794 14:08:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:37:48.794 14:08:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:48.794 14:08:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:37:48.794 14:08:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:37:48.794 14:08:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:48.794 14:08:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:48.794 14:08:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:37:48.794 14:08:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:48.794 14:08:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:37:48.794 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:48.794 --rc genhtml_branch_coverage=1 00:37:48.794 --rc genhtml_function_coverage=1 00:37:48.794 --rc genhtml_legend=1 00:37:48.794 --rc geninfo_all_blocks=1 00:37:48.794 --rc geninfo_unexecuted_blocks=1 00:37:48.794 00:37:48.794 ' 00:37:48.794 14:08:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:37:48.794 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:48.794 --rc genhtml_branch_coverage=1 00:37:48.794 --rc genhtml_function_coverage=1 00:37:48.794 --rc genhtml_legend=1 00:37:48.794 --rc geninfo_all_blocks=1 00:37:48.794 --rc geninfo_unexecuted_blocks=1 00:37:48.794 00:37:48.794 ' 00:37:48.794 14:08:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:37:48.794 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:48.794 --rc genhtml_branch_coverage=1 00:37:48.794 --rc genhtml_function_coverage=1 00:37:48.794 --rc genhtml_legend=1 00:37:48.794 --rc geninfo_all_blocks=1 00:37:48.794 --rc geninfo_unexecuted_blocks=1 00:37:48.794 00:37:48.794 ' 00:37:48.794 14:08:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:37:48.794 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:48.794 --rc genhtml_branch_coverage=1 00:37:48.794 --rc genhtml_function_coverage=1 00:37:48.794 --rc genhtml_legend=1 00:37:48.794 --rc geninfo_all_blocks=1 00:37:48.794 --rc geninfo_unexecuted_blocks=1 00:37:48.794 00:37:48.794 ' 00:37:48.794 14:08:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:37:48.794 14:08:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect 
-- nvmf/common.sh@7 -- # uname -s 00:37:48.794 14:08:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:48.794 14:08:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:48.794 14:08:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:48.794 14:08:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:48.794 14:08:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:48.794 14:08:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:48.794 14:08:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:48.794 14:08:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:48.794 14:08:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:48.794 14:08:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:48.794 14:08:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:37:48.794 14:08:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:37:48.795 14:08:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:48.795 14:08:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:48.795 14:08:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:48.795 14:08:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:48.795 14:08:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:37:48.795 14:08:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:37:48.795 14:08:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:48.795 14:08:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:48.795 14:08:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:48.795 14:08:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:48.795 14:08:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:48.795 14:08:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:48.795 14:08:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:37:48.795 14:08:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:48.795 14:08:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:37:48.795 14:08:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:48.795 14:08:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:48.795 14:08:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:48.795 14:08:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:48.795 14:08:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:48.795 14:08:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:37:48.795 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:48.795 14:08:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:48.795 14:08:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:48.795 14:08:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:48.795 14:08:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme 00:37:48.795 14:08:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- 
host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:37:48.795 14:08:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:37:48.795 14:08:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:37:48.795 14:08:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:37:48.795 14:08:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:48.795 14:08:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:48.795 14:08:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:48.795 14:08:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:48.795 14:08:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:48.795 14:08:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:48.795 14:08:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:48.795 14:08:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:48.795 14:08:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:48.795 14:08:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:37:48.795 14:08:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:37:55.369 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:55.369 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:37:55.369 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:55.369 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:55.369 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:55.369 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:55.369 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:55.369 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:37:55.369 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:55.369 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:37:55.369 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:37:55.369 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:37:55.369 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:37:55.369 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:37:55.369 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:37:55.369 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:55.369 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:55.369 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:55.369 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:55.369 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:55.369 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:55.369 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:55.369 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:55.369 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:55.369 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:55.369 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:55.369 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:55.369 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:55.369 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:37:55.369 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:37:55.369 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:37:55.369 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:37:55.369 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:37:55.369 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:55.369 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:55.369 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:37:55.369 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:37:55.369 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:37:55.369 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:37:55.369 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:37:55.369 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:37:55.369 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:37:55.369 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:37:55.369 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:55.369 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:37:55.369 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:37:55.369 14:08:54 
nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:37:55.369 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:37:55.369 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:37:55.369 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:37:55.369 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:37:55.369 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:37:55.369 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:55.369 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:37:55.369 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:55.369 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:55.369 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:37:55.369 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:55.369 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:55.369 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:37:55.369 Found net devices under 0000:18:00.0: mlx_0_0 00:37:55.369 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:55.369 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:55.369 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:55.369 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:37:55.369 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:55.369 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:55.369 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:37:55.369 Found net devices under 0000:18:00.1: mlx_0_1 00:37:55.369 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:55.369 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:55.370 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:37:55.370 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:55.370 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:37:55.370 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:37:55.370 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@448 -- # rdma_device_init 00:37:55.370 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@529 -- # load_ib_rdma_modules 
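The device discovery above is pure sysfs walking: each candidate PCI function (here the two mlx5 ports, 0x15b3:0x1015) is checked for network interfaces under its device directory, producing the "Found net devices under ..." lines. The same lookup as a standalone sketch:

# list the netdevs behind each RDMA-capable PCI function, as the trace does
for pci in 0000:18:00.0 0000:18:00.1; do
        pci_net_devs=( "/sys/bus/pci/devices/$pci/net/"* )
        pci_net_devs=( "${pci_net_devs[@]##*/}" )   # keep interface names only
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
done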
00:37:55.370 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@62 -- # uname 00:37:55.370 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:37:55.370 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@66 -- # modprobe ib_cm 00:37:55.370 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@67 -- # modprobe ib_core 00:37:55.370 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@68 -- # modprobe ib_umad 00:37:55.370 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:37:55.370 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@70 -- # modprobe iw_cm 00:37:55.370 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:37:55.370 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:37:55.370 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@530 -- # allocate_nic_ips 00:37:55.370 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:37:55.370 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@77 -- # get_rdma_if_list 00:37:55.370 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:37:55.370 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:37:55.370 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:37:55.370 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:37:55.370 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:37:55.370 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:37:55.370 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:37:55.370 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:37:55.370 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_0 00:37:55.370 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@109 -- # continue 2 00:37:55.370 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:37:55.370 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:37:55.370 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:37:55.370 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:37:55.370 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:37:55.370 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_1 00:37:55.370 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@109 -- # continue 2 00:37:55.370 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:37:55.370 14:08:54 
nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:37:55.370 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:37:55.370 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:37:55.370 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:37:55.370 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:37:55.370 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:37:55.370 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:37:55.370 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:37:55.370 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:37:55.370 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:37:55.370 altname enp24s0f0np0 00:37:55.370 altname ens785f0np0 00:37:55.370 inet 192.168.100.8/24 scope global mlx_0_0 00:37:55.370 valid_lft forever preferred_lft forever 00:37:55.370 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:37:55.370 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:37:55.370 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:37:55.370 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:37:55.370 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:37:55.370 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:37:55.370 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:37:55.370 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:37:55.370 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:37:55.370 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:37:55.370 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:37:55.370 altname enp24s0f1np1 00:37:55.370 altname ens785f1np1 00:37:55.370 inet 192.168.100.9/24 scope global mlx_0_1 00:37:55.370 valid_lft forever preferred_lft forever 00:37:55.370 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:37:55.370 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:55.370 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:37:55.370 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:37:55.370 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:37:55.370 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@90 -- # get_rdma_if_list 00:37:55.370 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:37:55.370 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:37:55.370 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 
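allocate_nic_ips resolves each RDMA interface to its first IPv4 address with an ip/awk/cut pipeline, yielding 192.168.100.8 for mlx_0_0 and 192.168.100.9 for mlx_0_1 (both links report state DOWN here, which does not matter for the addressing step). The traced helper as a standalone function:

# first IPv4 address of an interface, exactly as the trace computes it
get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}
get_ip_address mlx_0_0   # -> 192.168.100.8
get_ip_address mlx_0_1   # -> 192.168.100.9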
00:37:55.370 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:37:55.370 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:37:55.370 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:37:55.370 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:37:55.370 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:37:55.370 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_0 00:37:55.370 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@109 -- # continue 2 00:37:55.370 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:37:55.370 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:37:55.370 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:37:55.370 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:37:55.370 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:37:55.370 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_1 00:37:55.370 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@109 -- # continue 2 00:37:55.370 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:37:55.370 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:37:55.370 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:37:55.370 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:37:55.370 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:37:55.370 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:37:55.370 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:37:55.370 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:37:55.370 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:37:55.370 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:37:55.370 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:37:55.370 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:37:55.370 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:37:55.370 192.168.100.9' 00:37:55.370 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:37:55.370 192.168.100.9' 00:37:55.370 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@485 -- # head -n 1 00:37:55.370 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@485 -- # 
NVMF_FIRST_TARGET_IP=192.168.100.8 00:37:55.370 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:37:55.370 192.168.100.9' 00:37:55.370 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@486 -- # tail -n +2 00:37:55.370 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@486 -- # head -n 1 00:37:55.370 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:37:55.370 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:37:55.370 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:37:55.370 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:37:55.370 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:37:55.370 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:37:55.370 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:37:55.370 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:55.370 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:55.370 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:37:55.370 ************************************ 00:37:55.370 START TEST nvmf_target_disconnect_tc1 00:37:55.370 ************************************ 00:37:55.370 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:37:55.370 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:37:55.370 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:37:55.371 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:37:55.371 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:37:55.371 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:55.371 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:37:55.371 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:55.371 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- 
common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:37:55.371 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:55.371 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:37:55.371 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect ]] 00:37:55.371 14:08:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:37:55.371 [2024-12-05 14:08:54.742431] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:37:55.371 [2024-12-05 14:08:54.742501] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:37:55.371 [2024-12-05 14:08:54.742520] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d7040 00:37:55.937 [2024-12-05 14:08:55.746481] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 0] CQ transport error -6 (No such device or address) on qpair id 0 00:37:55.937 [2024-12-05 14:08:55.746543] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 0] in failed state. 
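(For reference: the RDMA address discovery traced above boils down to the following pattern. This is a minimal sketch reconstructed from the trace, not copied from nvmf/common.sh; it assumes the mlx_0_0/mlx_0_1 names that rxe_cfg_small.sh assigned earlier.)

    get_ip_address() {
        local interface=$1
        # first IPv4 address on the interface, with the /prefix stripped
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }

    RDMA_IP_LIST="$(get_ip_address mlx_0_0)
    $(get_ip_address mlx_0_1)"
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                # 192.168.100.8
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # 192.168.100.9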
00:37:55.937 [2024-12-05 14:08:55.746570] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 0] Ctrlr is in error state 00:37:55.937 [2024-12-05 14:08:55.746621] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:37:55.937 [2024-12-05 14:08:55.746645] nvme.c: 951:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:37:55.937 spdk_nvme_probe() failed for transport address '192.168.100.8' 00:37:55.937 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:37:55.937 Initializing NVMe Controllers 00:37:55.937 14:08:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:37:55.937 14:08:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:37:55.937 14:08:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:37:55.937 14:08:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:37:55.937 00:37:55.937 real 0m1.142s 00:37:55.937 user 0m0.943s 00:37:55.937 sys 0m0.187s 00:37:55.937 14:08:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:55.937 14:08:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:37:55.937 ************************************ 00:37:55.937 END TEST nvmf_target_disconnect_tc1 00:37:55.937 ************************************ 00:37:56.196 14:08:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:37:56.196 14:08:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:56.196 14:08:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:56.196 14:08:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:37:56.196 ************************************ 00:37:56.196 START TEST nvmf_target_disconnect_tc2 00:37:56.196 ************************************ 00:37:56.196 14:08:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:37:56.196 14:08:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 192.168.100.8 00:37:56.196 14:08:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:37:56.196 14:08:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:56.196 14:08:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:56.196 14:08:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:56.196 14:08:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=1960545 00:37:56.196 14:08:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 1960545 00:37:56.196 14:08:55 
nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:37:56.196 14:08:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 1960545 ']' 00:37:56.196 14:08:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:56.196 14:08:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:56.196 14:08:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:56.196 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:56.196 14:08:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:56.196 14:08:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:56.196 [2024-12-05 14:08:55.883721] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 00:37:56.196 [2024-12-05 14:08:55.883763] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:56.196 [2024-12-05 14:08:55.958301] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:56.196 [2024-12-05 14:08:55.980078] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:56.196 [2024-12-05 14:08:55.980116] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:56.196 [2024-12-05 14:08:55.980123] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:56.196 [2024-12-05 14:08:55.980129] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:56.196 [2024-12-05 14:08:55.980133] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
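(The waitforlisten step above blocks until the freshly launched nvmf_tgt answers on its RPC socket. A minimal sketch of that idea, reconstructed rather than taken from autotest_common.sh; the retry count and poll interval are assumptions:)

    wait_for_rpc() {
        local pid=$1 rpc_sock=${2:-/var/tmp/spdk.sock}
        local i
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1   # target exited before listening
            if /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py \
                    -s "$rpc_sock" rpc_get_methods &>/dev/null; then
                return 0                             # RPC server is up and listening
            fi
            sleep 0.1
        done
        return 1
    }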
00:37:56.196 [2024-12-05 14:08:55.981439] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:37:56.196 [2024-12-05 14:08:55.981528] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:37:56.196 [2024-12-05 14:08:55.981636] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:37:56.196 [2024-12-05 14:08:55.981637] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:37:56.461 14:08:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:56.461 14:08:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:37:56.462 14:08:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:56.462 14:08:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:56.462 14:08:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:56.462 14:08:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:56.462 14:08:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:37:56.462 14:08:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:56.462 14:08:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:56.462 Malloc0 00:37:56.462 14:08:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:56.462 14:08:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:37:56.462 14:08:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:56.462 14:08:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:56.462 [2024-12-05 14:08:56.157730] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x240be20/0x2418600) succeed. 00:37:56.462 [2024-12-05 14:08:56.166182] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x240d4b0/0x2459ca0) succeed. 
00:37:56.462 14:08:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:56.463 14:08:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:37:56.463 14:08:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:56.463 14:08:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:56.463 14:08:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:56.463 14:08:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:37:56.463 14:08:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:56.463 14:08:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:56.463 14:08:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:56.463 14:08:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:37:56.463 14:08:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:56.463 14:08:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:56.463 [2024-12-05 14:08:56.297087] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:37:56.463 14:08:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:56.463 14:08:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:37:56.463 14:08:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:56.463 14:08:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:56.463 14:08:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:56.463 14:08:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=1960626 00:37:56.463 14:08:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:37:56.463 14:08:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:37:59.003 14:08:58 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 
1960545 00:37:59.003 14:08:58 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:37:59.938 Write completed with error (sct=0, sc=8) 00:37:59.938 starting I/O failed 00:37:59.938 Write completed with error (sct=0, sc=8) 00:37:59.938 starting I/O failed 00:37:59.938 Write completed with error (sct=0, sc=8) 00:37:59.938 starting I/O failed 00:37:59.938 Read completed with error (sct=0, sc=8) 00:37:59.938 starting I/O failed 00:37:59.938 Write completed with error (sct=0, sc=8) 00:37:59.938 starting I/O failed 00:37:59.938 Write completed with error (sct=0, sc=8) 00:37:59.938 starting I/O failed 00:37:59.938 Write completed with error (sct=0, sc=8) 00:37:59.938 starting I/O failed 00:37:59.938 Write completed with error (sct=0, sc=8) 00:37:59.938 starting I/O failed 00:37:59.938 Read completed with error (sct=0, sc=8) 00:37:59.938 starting I/O failed 00:37:59.938 Read completed with error (sct=0, sc=8) 00:37:59.938 starting I/O failed 00:37:59.938 Write completed with error (sct=0, sc=8) 00:37:59.938 starting I/O failed 00:37:59.938 Read completed with error (sct=0, sc=8) 00:37:59.938 starting I/O failed 00:37:59.938 Read completed with error (sct=0, sc=8) 00:37:59.938 starting I/O failed 00:37:59.938 Write completed with error (sct=0, sc=8) 00:37:59.938 starting I/O failed 00:37:59.938 Read completed with error (sct=0, sc=8) 00:37:59.938 starting I/O failed 00:37:59.938 Write completed with error (sct=0, sc=8) 00:37:59.938 starting I/O failed 00:37:59.938 Read completed with error (sct=0, sc=8) 00:37:59.938 starting I/O failed 00:37:59.938 Write completed with error (sct=0, sc=8) 00:37:59.938 starting I/O failed 00:37:59.938 Read completed with error (sct=0, sc=8) 00:37:59.938 starting I/O failed 00:37:59.938 Read completed with error (sct=0, sc=8) 00:37:59.938 starting I/O failed 00:37:59.938 Read completed with error (sct=0, sc=8) 00:37:59.938 starting I/O failed 00:37:59.938 Write completed with error (sct=0, sc=8) 00:37:59.938 starting I/O failed 00:37:59.938 Write completed with error (sct=0, sc=8) 00:37:59.938 starting I/O failed 00:37:59.938 Read completed with error (sct=0, sc=8) 00:37:59.938 starting I/O failed 00:37:59.938 Write completed with error (sct=0, sc=8) 00:37:59.938 starting I/O failed 00:37:59.938 Write completed with error (sct=0, sc=8) 00:37:59.938 starting I/O failed 00:37:59.938 Write completed with error (sct=0, sc=8) 00:37:59.938 starting I/O failed 00:37:59.938 Write completed with error (sct=0, sc=8) 00:37:59.938 starting I/O failed 00:37:59.938 Write completed with error (sct=0, sc=8) 00:37:59.938 starting I/O failed 00:37:59.938 Write completed with error (sct=0, sc=8) 00:37:59.938 starting I/O failed 00:37:59.938 Write completed with error (sct=0, sc=8) 00:37:59.938 starting I/O failed 00:37:59.938 Read completed with error (sct=0, sc=8) 00:37:59.938 starting I/O failed 00:37:59.938 [2024-12-05 14:08:59.491433] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:38:00.506 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 1960545 Killed "${NVMF_APP[@]}" "$@" 00:38:00.506 14:09:00 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 192.168.100.8 00:38:00.506 14:09:00 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # 
nvmfappstart -m 0xF0 00:38:00.506 14:09:00 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:00.506 14:09:00 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:00.506 14:09:00 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:00.506 14:09:00 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=1961340 00:38:00.506 14:09:00 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 1961340 00:38:00.506 14:09:00 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:38:00.506 14:09:00 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 1961340 ']' 00:38:00.506 14:09:00 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:00.506 14:09:00 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:00.506 14:09:00 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:00.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:00.506 14:09:00 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:00.506 14:09:00 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:00.765 [2024-12-05 14:09:00.372219] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 00:38:00.765 [2024-12-05 14:09:00.372267] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:00.765 [2024-12-05 14:09:00.449580] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:38:00.765 [2024-12-05 14:09:00.470506] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:00.765 [2024-12-05 14:09:00.470550] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:00.765 [2024-12-05 14:09:00.470557] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:00.765 [2024-12-05 14:09:00.470563] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:00.765 [2024-12-05 14:09:00.470568] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
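(Stepping back, the tc2 sequence producing the I/O failures above and below is: start a target, run the reconnect example against it in the background, SIGKILL the target mid-I/O, bring a fresh target up, and let the example try to ride out the outage. A condensed sketch of that flow, reconstructed from the traced commands; disconnect_init and nvmfpid are the script's own names as seen in the trace:)

    disconnect_init 192.168.100.8      # nvmfappstart + subsystem/listener setup, as traced above
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect \
        -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
        -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' &
    reconnectpid=$!
    sleep 2
    kill -9 $nvmfpid                   # hard-kill the target while I/O is in flight
    sleep 2
    disconnect_init 192.168.100.8      # restart the target (new pid, empty controller state)
    wait $reconnectpid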
00:38:00.765 [2024-12-05 14:09:00.472011] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:38:00.765 [2024-12-05 14:09:00.472122] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:38:00.765 [2024-12-05 14:09:00.472228] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:38:00.765 [2024-12-05 14:09:00.472229] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:38:00.765 Read completed with error (sct=0, sc=8) 00:38:00.765 starting I/O failed 00:38:00.765 Write completed with error (sct=0, sc=8) 00:38:00.765 starting I/O failed 00:38:00.765 Read completed with error (sct=0, sc=8) 00:38:00.765 starting I/O failed 00:38:00.765 Write completed with error (sct=0, sc=8) 00:38:00.765 starting I/O failed 00:38:00.765 Write completed with error (sct=0, sc=8) 00:38:00.765 starting I/O failed 00:38:00.765 Read completed with error (sct=0, sc=8) 00:38:00.765 starting I/O failed 00:38:00.765 Write completed with error (sct=0, sc=8) 00:38:00.765 starting I/O failed 00:38:00.765 Write completed with error (sct=0, sc=8) 00:38:00.765 starting I/O failed 00:38:00.765 Write completed with error (sct=0, sc=8) 00:38:00.765 starting I/O failed 00:38:00.765 Write completed with error (sct=0, sc=8) 00:38:00.765 starting I/O failed 00:38:00.765 Write completed with error (sct=0, sc=8) 00:38:00.765 starting I/O failed 00:38:00.765 Write completed with error (sct=0, sc=8) 00:38:00.765 starting I/O failed 00:38:00.765 Write completed with error (sct=0, sc=8) 00:38:00.765 starting I/O failed 00:38:00.765 Write completed with error (sct=0, sc=8) 00:38:00.765 starting I/O failed 00:38:00.765 Read completed with error (sct=0, sc=8) 00:38:00.765 starting I/O failed 00:38:00.765 Read completed with error (sct=0, sc=8) 00:38:00.765 starting I/O failed 00:38:00.765 Write completed with error (sct=0, sc=8) 00:38:00.765 starting I/O failed 00:38:00.765 Read completed with error (sct=0, sc=8) 00:38:00.765 starting I/O failed 00:38:00.765 Read completed with error (sct=0, sc=8) 00:38:00.765 starting I/O failed 00:38:00.765 Write completed with error (sct=0, sc=8) 00:38:00.765 starting I/O failed 00:38:00.765 Read completed with error (sct=0, sc=8) 00:38:00.765 starting I/O failed 00:38:00.765 Write completed with error (sct=0, sc=8) 00:38:00.765 starting I/O failed 00:38:00.766 Read completed with error (sct=0, sc=8) 00:38:00.766 starting I/O failed 00:38:00.766 Write completed with error (sct=0, sc=8) 00:38:00.766 starting I/O failed 00:38:00.766 Write completed with error (sct=0, sc=8) 00:38:00.766 starting I/O failed 00:38:00.766 Read completed with error (sct=0, sc=8) 00:38:00.766 starting I/O failed 00:38:00.766 Read completed with error (sct=0, sc=8) 00:38:00.766 starting I/O failed 00:38:00.766 Read completed with error (sct=0, sc=8) 00:38:00.766 starting I/O failed 00:38:00.766 Write completed with error (sct=0, sc=8) 00:38:00.766 starting I/O failed 00:38:00.766 Write completed with error (sct=0, sc=8) 00:38:00.766 starting I/O failed 00:38:00.766 Write completed with error (sct=0, sc=8) 00:38:00.766 starting I/O failed 00:38:00.766 Write completed with error (sct=0, sc=8) 00:38:00.766 starting I/O failed 00:38:00.766 [2024-12-05 14:09:00.496456] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:00.766 14:09:00 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:00.766 
14:09:00 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:38:00.766 14:09:00 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:00.766 14:09:00 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:00.766 14:09:00 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:00.766 14:09:00 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:00.766 14:09:00 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:38:00.766 14:09:00 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:00.766 14:09:00 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:01.023 Malloc0 00:38:01.023 14:09:00 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:01.023 14:09:00 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:38:01.023 14:09:00 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:01.023 14:09:00 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:01.023 [2024-12-05 14:09:00.664480] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x13f4e20/0x1401600) succeed. 00:38:01.023 [2024-12-05 14:09:00.673270] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x13f64b0/0x1442ca0) succeed. 
00:38:01.023 14:09:00 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:01.023 14:09:00 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:38:01.023 14:09:00 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:01.023 14:09:00 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:01.023 14:09:00 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:01.023 14:09:00 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:38:01.023 14:09:00 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:01.023 14:09:00 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:01.023 14:09:00 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:01.023 14:09:00 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:38:01.023 14:09:00 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:01.023 14:09:00 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:01.023 [2024-12-05 14:09:00.806219] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:38:01.023 14:09:00 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:01.023 14:09:00 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:38:01.023 14:09:00 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:01.024 14:09:00 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:01.024 14:09:00 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:01.024 14:09:00 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 1960626 00:38:01.957 Read completed with error (sct=0, sc=8) 00:38:01.957 starting I/O failed 00:38:01.957 Write completed with error (sct=0, sc=8) 00:38:01.957 starting I/O failed 00:38:01.957 Read completed with error (sct=0, sc=8) 00:38:01.957 starting I/O failed 00:38:01.957 Write completed with error (sct=0, sc=8) 00:38:01.957 starting I/O failed 00:38:01.957 Read completed with error (sct=0, sc=8) 00:38:01.957 starting I/O failed 00:38:01.957 Read completed with error (sct=0, sc=8) 00:38:01.957 starting I/O failed 00:38:01.957 Read completed with error (sct=0, sc=8) 00:38:01.957 
starting I/O failed 00:38:01.957 Read completed with error (sct=0, sc=8) 00:38:01.957 starting I/O failed 00:38:01.957 Write completed with error (sct=0, sc=8) 00:38:01.957 starting I/O failed 00:38:01.957 Read completed with error (sct=0, sc=8) 00:38:01.957 starting I/O failed 00:38:01.957 Write completed with error (sct=0, sc=8) 00:38:01.957 starting I/O failed 00:38:01.957 Write completed with error (sct=0, sc=8) 00:38:01.957 starting I/O failed 00:38:01.957 Write completed with error (sct=0, sc=8) 00:38:01.957 starting I/O failed 00:38:01.957 Read completed with error (sct=0, sc=8) 00:38:01.957 starting I/O failed 00:38:01.957 Read completed with error (sct=0, sc=8) 00:38:01.957 starting I/O failed 00:38:01.957 Read completed with error (sct=0, sc=8) 00:38:01.957 starting I/O failed 00:38:01.957 Write completed with error (sct=0, sc=8) 00:38:01.957 starting I/O failed 00:38:01.957 Write completed with error (sct=0, sc=8) 00:38:01.957 starting I/O failed 00:38:01.957 Read completed with error (sct=0, sc=8) 00:38:01.957 starting I/O failed 00:38:01.957 Write completed with error (sct=0, sc=8) 00:38:01.957 starting I/O failed 00:38:01.957 Read completed with error (sct=0, sc=8) 00:38:01.957 starting I/O failed 00:38:01.958 Write completed with error (sct=0, sc=8) 00:38:01.958 starting I/O failed 00:38:01.958 Read completed with error (sct=0, sc=8) 00:38:01.958 starting I/O failed 00:38:01.958 Write completed with error (sct=0, sc=8) 00:38:01.958 starting I/O failed 00:38:01.958 Read completed with error (sct=0, sc=8) 00:38:01.958 starting I/O failed 00:38:01.958 Write completed with error (sct=0, sc=8) 00:38:01.958 starting I/O failed 00:38:01.958 Write completed with error (sct=0, sc=8) 00:38:01.958 starting I/O failed 00:38:01.958 Read completed with error (sct=0, sc=8) 00:38:01.958 starting I/O failed 00:38:01.958 Write completed with error (sct=0, sc=8) 00:38:01.958 starting I/O failed 00:38:01.958 Read completed with error (sct=0, sc=8) 00:38:01.958 starting I/O failed 00:38:01.958 Read completed with error (sct=0, sc=8) 00:38:01.958 starting I/O failed 00:38:01.958 Write completed with error (sct=0, sc=8) 00:38:01.958 starting I/O failed 00:38:01.958 [2024-12-05 14:09:01.501379] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:01.958 [2024-12-05 14:09:01.506213] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:01.958 [2024-12-05 14:09:01.506267] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:01.958 [2024-12-05 14:09:01.506285] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:01.958 [2024-12-05 14:09:01.506297] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:01.958 [2024-12-05 14:09:01.506303] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:38:01.958 [2024-12-05 14:09:01.516161] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:01.958 qpair failed and we were unable to recover it. 
00:38:01.958 [2024-12-05 14:09:01.526084] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:01.958 [2024-12-05 14:09:01.526123] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:01.958 [2024-12-05 14:09:01.526139] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:01.958 [2024-12-05 14:09:01.526146] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:01.958 [2024-12-05 14:09:01.526152] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:38:01.958 [2024-12-05 14:09:01.536246] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:01.958 qpair failed and we were unable to recover it. 00:38:01.958 [2024-12-05 14:09:01.546047] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:01.958 [2024-12-05 14:09:01.546089] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:01.958 [2024-12-05 14:09:01.546105] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:01.958 [2024-12-05 14:09:01.546112] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:01.958 [2024-12-05 14:09:01.546117] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:38:01.958 [2024-12-05 14:09:01.556379] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:01.958 qpair failed and we were unable to recover it. 00:38:01.958 [2024-12-05 14:09:01.566120] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:01.958 [2024-12-05 14:09:01.566160] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:01.958 [2024-12-05 14:09:01.566176] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:01.958 [2024-12-05 14:09:01.566182] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:01.958 [2024-12-05 14:09:01.566188] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:38:01.958 [2024-12-05 14:09:01.576437] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:01.958 qpair failed and we were unable to recover it. 
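(Each block like the two above appears to record one failed qpair reconnect attempt: the restarted target no longer recognizes controller ID 0x1, so the Fabrics CONNECT for the old I/O qpair is rejected with the command-specific status shown, sct 1/sc 130, and the host gives up on that qpair. A hypothetical one-liner for tallying these in a saved copy of this log; the filename is an assumption:)

    grep -c 'qpair failed and we were unable to recover it' target_disconnect.log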
00:38:01.958 [2024-12-05 14:09:01.586273] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:01.958 [2024-12-05 14:09:01.586313] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:01.958 [2024-12-05 14:09:01.586328] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:01.958 [2024-12-05 14:09:01.586334] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:01.958 [2024-12-05 14:09:01.586340] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:38:01.958 [2024-12-05 14:09:01.596563] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:01.958 qpair failed and we were unable to recover it. 00:38:01.958 [2024-12-05 14:09:01.606210] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:01.958 [2024-12-05 14:09:01.606244] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:01.958 [2024-12-05 14:09:01.606259] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:01.958 [2024-12-05 14:09:01.606266] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:01.958 [2024-12-05 14:09:01.606271] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:38:01.958 [2024-12-05 14:09:01.616468] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:01.958 qpair failed and we were unable to recover it. 00:38:01.958 [2024-12-05 14:09:01.626291] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:01.958 [2024-12-05 14:09:01.626332] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:01.958 [2024-12-05 14:09:01.626347] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:01.958 [2024-12-05 14:09:01.626353] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:01.958 [2024-12-05 14:09:01.626359] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:38:01.958 [2024-12-05 14:09:01.636619] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:01.958 qpair failed and we were unable to recover it. 
00:38:01.958 [2024-12-05 14:09:01.646360] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:01.958 [2024-12-05 14:09:01.646405] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:01.958 [2024-12-05 14:09:01.646421] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:01.958 [2024-12-05 14:09:01.646427] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:01.958 [2024-12-05 14:09:01.646433] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:38:01.958 [2024-12-05 14:09:01.656599] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:01.958 qpair failed and we were unable to recover it. 00:38:01.958 [2024-12-05 14:09:01.666423] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:01.958 [2024-12-05 14:09:01.666458] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:01.959 [2024-12-05 14:09:01.666473] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:01.959 [2024-12-05 14:09:01.666480] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:01.959 [2024-12-05 14:09:01.666485] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:38:01.959 [2024-12-05 14:09:01.676579] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:01.959 qpair failed and we were unable to recover it. 00:38:01.959 [2024-12-05 14:09:01.686479] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:01.959 [2024-12-05 14:09:01.686520] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:01.959 [2024-12-05 14:09:01.686535] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:01.959 [2024-12-05 14:09:01.686541] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:01.959 [2024-12-05 14:09:01.686547] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:38:01.959 [2024-12-05 14:09:01.696610] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:01.959 qpair failed and we were unable to recover it. 
00:38:01.959 [2024-12-05 14:09:01.706508] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:01.959 [2024-12-05 14:09:01.706546] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:01.959 [2024-12-05 14:09:01.706561] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:01.959 [2024-12-05 14:09:01.706568] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:01.959 [2024-12-05 14:09:01.706573] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:38:01.959 [2024-12-05 14:09:01.716903] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:01.959 qpair failed and we were unable to recover it. 00:38:01.959 [2024-12-05 14:09:01.726608] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:01.959 [2024-12-05 14:09:01.726646] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:01.959 [2024-12-05 14:09:01.726662] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:01.959 [2024-12-05 14:09:01.726668] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:01.959 [2024-12-05 14:09:01.726674] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:38:01.959 [2024-12-05 14:09:01.736625] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:01.959 qpair failed and we were unable to recover it. 00:38:01.959 [2024-12-05 14:09:01.746567] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:01.959 [2024-12-05 14:09:01.746606] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:01.959 [2024-12-05 14:09:01.746622] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:01.959 [2024-12-05 14:09:01.746629] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:01.959 [2024-12-05 14:09:01.746634] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:38:01.959 [2024-12-05 14:09:01.756743] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:01.959 qpair failed and we were unable to recover it. 
00:38:01.959 [2024-12-05 14:09:01.766695] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:01.959 [2024-12-05 14:09:01.766734] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:01.959 [2024-12-05 14:09:01.766753] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:01.959 [2024-12-05 14:09:01.766759] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:01.959 [2024-12-05 14:09:01.766765] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:38:01.959 [2024-12-05 14:09:01.776986] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:01.959 qpair failed and we were unable to recover it. 00:38:01.959 [2024-12-05 14:09:01.786740] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:01.959 [2024-12-05 14:09:01.786776] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:01.959 [2024-12-05 14:09:01.786791] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:01.959 [2024-12-05 14:09:01.786798] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:01.959 [2024-12-05 14:09:01.786803] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:38:01.959 [2024-12-05 14:09:01.797010] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:01.959 qpair failed and we were unable to recover it. 00:38:01.959 [2024-12-05 14:09:01.806803] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:01.959 [2024-12-05 14:09:01.806841] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:01.959 [2024-12-05 14:09:01.806857] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:01.959 [2024-12-05 14:09:01.806863] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:01.959 [2024-12-05 14:09:01.806868] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:38:02.218 [2024-12-05 14:09:01.816953] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:02.218 qpair failed and we were unable to recover it. 
00:38:02.218 [2024-12-05 14:09:01.826598] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:02.218 [2024-12-05 14:09:01.826642] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:02.218 [2024-12-05 14:09:01.826656] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:02.218 [2024-12-05 14:09:01.826663] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:02.218 [2024-12-05 14:09:01.826669] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:38:02.218 [2024-12-05 14:09:01.837165] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:02.218 qpair failed and we were unable to recover it. 00:38:02.218 [2024-12-05 14:09:01.846869] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:02.218 [2024-12-05 14:09:01.846901] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:02.218 [2024-12-05 14:09:01.846917] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:02.218 [2024-12-05 14:09:01.846926] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:02.218 [2024-12-05 14:09:01.846932] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:38:02.218 [2024-12-05 14:09:01.857235] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:02.218 qpair failed and we were unable to recover it. 00:38:02.218 [2024-12-05 14:09:01.866930] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:02.218 [2024-12-05 14:09:01.866965] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:02.218 [2024-12-05 14:09:01.866980] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:02.218 [2024-12-05 14:09:01.866986] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:02.218 [2024-12-05 14:09:01.866991] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:38:02.218 [2024-12-05 14:09:01.877194] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:02.218 qpair failed and we were unable to recover it. 
00:38:02.218 [2024-12-05 14:09:01.886923] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:02.218 [2024-12-05 14:09:01.886963] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:02.218 [2024-12-05 14:09:01.886978] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:02.218 [2024-12-05 14:09:01.886984] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:02.218 [2024-12-05 14:09:01.886990] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:38:02.218 [2024-12-05 14:09:01.897290] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:02.218 qpair failed and we were unable to recover it.
00:38:03.517 last message sequence repeated 68 more times between [2024-12-05 14:09:01.907004] and [2024-12-05 14:09:03.261245], always for rqpair=0x2000003d3000 on qpair id 3; every attempt ended: qpair failed and we were unable to recover it.
00:38:03.517 [2024-12-05 14:09:03.270940] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:03.517 [2024-12-05 14:09:03.270978] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:03.517 [2024-12-05 14:09:03.270993] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:03.517 [2024-12-05 14:09:03.270999] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:03.517 [2024-12-05 14:09:03.271005] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:38:03.517 [2024-12-05 14:09:03.281181] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:03.517 qpair failed and we were unable to recover it. 00:38:03.517 [2024-12-05 14:09:03.291009] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:03.517 [2024-12-05 14:09:03.291046] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:03.517 [2024-12-05 14:09:03.291061] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:03.517 [2024-12-05 14:09:03.291068] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:03.517 [2024-12-05 14:09:03.291073] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:38:03.517 [2024-12-05 14:09:03.301106] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:03.517 qpair failed and we were unable to recover it. 00:38:03.517 [2024-12-05 14:09:03.311080] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:03.517 [2024-12-05 14:09:03.311118] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:03.517 [2024-12-05 14:09:03.311133] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:03.517 [2024-12-05 14:09:03.311139] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:03.517 [2024-12-05 14:09:03.311144] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:38:03.517 [2024-12-05 14:09:03.321263] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:03.517 qpair failed and we were unable to recover it. 
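On the target side, the ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair message is the subsystem failing to find a live controller for the CNTLID carried in the I/O-queue CONNECT (here 0x1, presumably a controller the test has already torn down). A toy model of that shape of check, illustrative only and not SPDK's internal structures:

    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Illustrative model: the subsystem keys live controllers by CNTLID;
     * an I/O-queue CONNECT naming an unknown CNTLID is rejected. */
    struct ctrlr { uint16_t cntlid; };

    static struct ctrlr live[] = { { 0x2 }, { 0x3 } };  /* 0x1 already gone */

    static struct ctrlr *find_ctrlr(uint16_t cntlid)
    {
        for (size_t i = 0; i < sizeof(live) / sizeof(live[0]); i++) {
            if (live[i].cntlid == cntlid) {
                return &live[i];
            }
        }
        return NULL;
    }

    int main(void)
    {
        if (find_ctrlr(0x1) == NULL) {
            /* Target logs "Unknown controller ID 0x1"; the host then
             * sees sct 1 / sc 0x82, CONNECT Invalid Parameters. */
            printf("Unknown controller ID 0x1\n");
        }
        return 0;
    }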
00:38:03.517 [2024-12-05 14:09:03.331279] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:03.517 [2024-12-05 14:09:03.331316] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:03.517 [2024-12-05 14:09:03.331331] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:03.517 [2024-12-05 14:09:03.331338] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:03.517 [2024-12-05 14:09:03.331343] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:38:03.517 [2024-12-05 14:09:03.341432] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:03.517 qpair failed and we were unable to recover it. 00:38:03.517 [2024-12-05 14:09:03.351228] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:03.517 [2024-12-05 14:09:03.351263] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:03.517 [2024-12-05 14:09:03.351278] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:03.517 [2024-12-05 14:09:03.351284] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:03.517 [2024-12-05 14:09:03.351290] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:38:03.517 [2024-12-05 14:09:03.361534] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:03.517 qpair failed and we were unable to recover it. 00:38:03.776 [2024-12-05 14:09:03.371358] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:03.776 [2024-12-05 14:09:03.371403] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:03.776 [2024-12-05 14:09:03.371418] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:03.776 [2024-12-05 14:09:03.371424] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:03.776 [2024-12-05 14:09:03.371430] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:38:03.776 [2024-12-05 14:09:03.381561] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:03.776 qpair failed and we were unable to recover it. 
00:38:03.776 [2024-12-05 14:09:03.391293] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:03.776 [2024-12-05 14:09:03.391330] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:03.776 [2024-12-05 14:09:03.391349] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:03.776 [2024-12-05 14:09:03.391355] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:03.776 [2024-12-05 14:09:03.391361] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:38:03.776 [2024-12-05 14:09:03.401589] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:03.776 qpair failed and we were unable to recover it. 00:38:03.776 [2024-12-05 14:09:03.411387] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:03.776 [2024-12-05 14:09:03.411425] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:03.776 [2024-12-05 14:09:03.411440] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:03.776 [2024-12-05 14:09:03.411446] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:03.776 [2024-12-05 14:09:03.411452] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:38:03.776 [2024-12-05 14:09:03.421533] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:03.776 qpair failed and we were unable to recover it. 00:38:03.776 [2024-12-05 14:09:03.431564] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:03.776 [2024-12-05 14:09:03.431601] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:03.776 [2024-12-05 14:09:03.431616] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:03.776 [2024-12-05 14:09:03.431622] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:03.776 [2024-12-05 14:09:03.431628] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:38:03.776 [2024-12-05 14:09:03.441686] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:03.776 qpair failed and we were unable to recover it. 
00:38:03.776 [2024-12-05 14:09:03.451434] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:03.776 [2024-12-05 14:09:03.451474] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:03.776 [2024-12-05 14:09:03.451489] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:03.776 [2024-12-05 14:09:03.451495] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:03.776 [2024-12-05 14:09:03.451501] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:38:03.776 [2024-12-05 14:09:03.461814] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:03.776 qpair failed and we were unable to recover it. 00:38:03.776 [2024-12-05 14:09:03.471690] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:03.776 [2024-12-05 14:09:03.471727] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:03.776 [2024-12-05 14:09:03.471742] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:03.776 [2024-12-05 14:09:03.471752] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:03.776 [2024-12-05 14:09:03.471758] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:38:03.776 [2024-12-05 14:09:03.481837] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:03.776 qpair failed and we were unable to recover it. 00:38:03.776 [2024-12-05 14:09:03.491575] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:03.776 [2024-12-05 14:09:03.491615] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:03.776 [2024-12-05 14:09:03.491630] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:03.776 [2024-12-05 14:09:03.491636] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:03.776 [2024-12-05 14:09:03.491642] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:38:03.776 [2024-12-05 14:09:03.501873] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:03.776 qpair failed and we were unable to recover it. 
00:38:03.776 [2024-12-05 14:09:03.511597] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:03.776 [2024-12-05 14:09:03.511638] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:03.776 [2024-12-05 14:09:03.511652] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:03.776 [2024-12-05 14:09:03.511659] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:03.776 [2024-12-05 14:09:03.511664] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:38:03.776 [2024-12-05 14:09:03.521840] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:03.776 qpair failed and we were unable to recover it. 00:38:03.776 [2024-12-05 14:09:03.531665] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:03.776 [2024-12-05 14:09:03.531704] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:03.777 [2024-12-05 14:09:03.531718] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:03.777 [2024-12-05 14:09:03.531724] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:03.777 [2024-12-05 14:09:03.531730] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:38:03.777 [2024-12-05 14:09:03.541867] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:03.777 qpair failed and we were unable to recover it. 00:38:03.777 [2024-12-05 14:09:03.551730] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:03.777 [2024-12-05 14:09:03.551762] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:03.777 [2024-12-05 14:09:03.551777] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:03.777 [2024-12-05 14:09:03.551784] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:03.777 [2024-12-05 14:09:03.551789] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:38:03.777 [2024-12-05 14:09:03.562138] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:03.777 qpair failed and we were unable to recover it. 
00:38:03.777 [2024-12-05 14:09:03.571778] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:03.777 [2024-12-05 14:09:03.571816] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:03.777 [2024-12-05 14:09:03.571831] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:03.777 [2024-12-05 14:09:03.571837] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:03.777 [2024-12-05 14:09:03.571843] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:38:03.777 [2024-12-05 14:09:03.582173] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:03.777 qpair failed and we were unable to recover it. 00:38:03.777 [2024-12-05 14:09:03.591870] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:03.777 [2024-12-05 14:09:03.591911] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:03.777 [2024-12-05 14:09:03.591926] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:03.777 [2024-12-05 14:09:03.591932] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:03.777 [2024-12-05 14:09:03.591937] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:38:03.777 [2024-12-05 14:09:03.602158] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:03.777 qpair failed and we were unable to recover it. 00:38:03.777 [2024-12-05 14:09:03.611891] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:03.777 [2024-12-05 14:09:03.611932] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:03.777 [2024-12-05 14:09:03.611946] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:03.777 [2024-12-05 14:09:03.611952] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:03.777 [2024-12-05 14:09:03.611958] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:38:03.777 [2024-12-05 14:09:03.622301] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:03.777 qpair failed and we were unable to recover it. 
00:38:04.036 [2024-12-05 14:09:03.632068] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:04.036 [2024-12-05 14:09:03.632102] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:04.036 [2024-12-05 14:09:03.632116] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:04.036 [2024-12-05 14:09:03.632122] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:04.036 [2024-12-05 14:09:03.632127] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:38:04.036 [2024-12-05 14:09:03.642307] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:04.036 qpair failed and we were unable to recover it. 00:38:04.036 [2024-12-05 14:09:03.652066] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:04.036 [2024-12-05 14:09:03.652105] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:04.036 [2024-12-05 14:09:03.652120] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:04.036 [2024-12-05 14:09:03.652127] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:04.036 [2024-12-05 14:09:03.652132] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:38:04.036 [2024-12-05 14:09:03.662204] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:04.036 qpair failed and we were unable to recover it. 00:38:04.036 [2024-12-05 14:09:03.672061] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:04.036 [2024-12-05 14:09:03.672101] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:04.036 [2024-12-05 14:09:03.672116] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:04.036 [2024-12-05 14:09:03.672122] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:04.036 [2024-12-05 14:09:03.672128] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:38:04.036 [2024-12-05 14:09:03.682360] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:04.036 qpair failed and we were unable to recover it. 
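The "-6 (No such device or address)" closing each attempt is -ENXIO surfacing through spdk_nvme_qpair_process_completions() (the nvme_qpair.c: 812 line above) once the transport has marked the qpair failed. A hedged sketch of the host-side poll loop that would observe this; the qpair itself would come from spdk_nvme_ctrlr_alloc_io_qpair(), not shown:

    #include <stdio.h>
    #include <string.h>
    #include "spdk/nvme.h"

    /* Poll an I/O qpair until the transport reports it failed. */
    static void poll_qpair(struct spdk_nvme_qpair *qpair)
    {
        for (;;) {
            /* Returns the number of completions reaped, or a negative
             * errno once the qpair is dead, e.g. the -6 (-ENXIO,
             * "No such device or address") printed in the log. */
            int32_t rc = spdk_nvme_qpair_process_completions(qpair, 0);
            if (rc < 0) {
                fprintf(stderr, "qpair failed: %d (%s)\n",
                        (int)rc, strerror(-(int)rc));
                break;  /* reconnect/recovery would start here */
            }
        }
    }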
00:38:04.036 [2024-12-05 14:09:03.692145] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:04.036 [2024-12-05 14:09:03.692188] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:04.036 [2024-12-05 14:09:03.692203] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:04.036 [2024-12-05 14:09:03.692209] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:04.036 [2024-12-05 14:09:03.692214] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:38:04.036 [2024-12-05 14:09:03.702460] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:04.036 qpair failed and we were unable to recover it. 00:38:04.036 [2024-12-05 14:09:03.712170] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:04.036 [2024-12-05 14:09:03.712209] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:04.036 [2024-12-05 14:09:03.712224] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:04.036 [2024-12-05 14:09:03.712230] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:04.036 [2024-12-05 14:09:03.712235] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:38:04.036 [2024-12-05 14:09:03.722547] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:04.036 qpair failed and we were unable to recover it. 00:38:04.036 [2024-12-05 14:09:03.732170] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:04.036 [2024-12-05 14:09:03.732205] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:04.036 [2024-12-05 14:09:03.732223] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:04.036 [2024-12-05 14:09:03.732230] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:04.036 [2024-12-05 14:09:03.732235] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:38:04.036 [2024-12-05 14:09:03.742588] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:04.036 qpair failed and we were unable to recover it. 
00:38:04.036 [2024-12-05 14:09:03.752426] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:04.036 [2024-12-05 14:09:03.752469] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:04.036 [2024-12-05 14:09:03.752484] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:04.036 [2024-12-05 14:09:03.752490] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:04.036 [2024-12-05 14:09:03.752495] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:38:04.036 [2024-12-05 14:09:03.762514] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:04.036 qpair failed and we were unable to recover it. 00:38:04.036 [2024-12-05 14:09:03.772337] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:04.036 [2024-12-05 14:09:03.772380] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:04.036 [2024-12-05 14:09:03.772394] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:04.036 [2024-12-05 14:09:03.772400] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:04.036 [2024-12-05 14:09:03.772406] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:38:04.036 [2024-12-05 14:09:03.782666] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:04.036 qpair failed and we were unable to recover it. 00:38:04.036 [2024-12-05 14:09:03.792407] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:04.037 [2024-12-05 14:09:03.792445] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:04.037 [2024-12-05 14:09:03.792460] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:04.037 [2024-12-05 14:09:03.792466] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:04.037 [2024-12-05 14:09:03.792472] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:38:04.037 [2024-12-05 14:09:03.802755] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:04.037 qpair failed and we were unable to recover it. 
00:38:04.037 [2024-12-05 14:09:03.812401] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:04.037 [2024-12-05 14:09:03.812437] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:04.037 [2024-12-05 14:09:03.812452] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:04.037 [2024-12-05 14:09:03.812458] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:04.037 [2024-12-05 14:09:03.812466] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:38:04.037 [2024-12-05 14:09:03.822889] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:04.037 qpair failed and we were unable to recover it. 00:38:04.037 [2024-12-05 14:09:03.832592] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:04.037 [2024-12-05 14:09:03.832628] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:04.037 [2024-12-05 14:09:03.832644] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:04.037 [2024-12-05 14:09:03.832650] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:04.037 [2024-12-05 14:09:03.832655] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:38:04.037 [2024-12-05 14:09:03.842750] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:04.037 qpair failed and we were unable to recover it. 00:38:04.037 [2024-12-05 14:09:03.852559] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:04.037 [2024-12-05 14:09:03.852598] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:04.037 [2024-12-05 14:09:03.852613] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:04.037 [2024-12-05 14:09:03.852619] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:04.037 [2024-12-05 14:09:03.852625] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:38:04.037 [2024-12-05 14:09:03.862906] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:04.037 qpair failed and we were unable to recover it. 
00:38:04.037 [2024-12-05 14:09:03.872727] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:04.037 [2024-12-05 14:09:03.872763] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:04.037 [2024-12-05 14:09:03.872778] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:04.037 [2024-12-05 14:09:03.872784] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:04.037 [2024-12-05 14:09:03.872789] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:38:04.037 [2024-12-05 14:09:03.882895] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:04.037 qpair failed and we were unable to recover it. 00:38:04.296 [2024-12-05 14:09:03.892775] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:04.296 [2024-12-05 14:09:03.892816] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:04.296 [2024-12-05 14:09:03.892832] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:04.296 [2024-12-05 14:09:03.892838] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:04.296 [2024-12-05 14:09:03.892843] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:38:04.296 [2024-12-05 14:09:03.902979] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:04.296 qpair failed and we were unable to recover it. 00:38:04.296 [2024-12-05 14:09:03.912715] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:04.296 [2024-12-05 14:09:03.912760] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:04.296 [2024-12-05 14:09:03.912775] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:04.296 [2024-12-05 14:09:03.912782] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:04.296 [2024-12-05 14:09:03.912787] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:38:04.296 [2024-12-05 14:09:03.923027] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:04.296 qpair failed and we were unable to recover it. 
00:38:04.296 [2024-12-05 14:09:03.932873] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:04.296 [2024-12-05 14:09:03.932907] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:04.296 [2024-12-05 14:09:03.932922] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:04.296 [2024-12-05 14:09:03.932928] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:04.296 [2024-12-05 14:09:03.932933] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:38:04.296 [2024-12-05 14:09:03.943182] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:04.296 qpair failed and we were unable to recover it. 00:38:04.296 [2024-12-05 14:09:03.952913] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:04.296 [2024-12-05 14:09:03.952945] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:04.296 [2024-12-05 14:09:03.952960] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:04.296 [2024-12-05 14:09:03.952966] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:04.296 [2024-12-05 14:09:03.952971] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:38:04.296 [2024-12-05 14:09:03.963069] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:04.296 qpair failed and we were unable to recover it. 00:38:04.296 [2024-12-05 14:09:03.972986] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:04.296 [2024-12-05 14:09:03.973024] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:04.296 [2024-12-05 14:09:03.973039] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:04.296 [2024-12-05 14:09:03.973045] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:04.296 [2024-12-05 14:09:03.973051] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:38:04.296 [2024-12-05 14:09:03.983418] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:04.296 qpair failed and we were unable to recover it. 
00:38:04.296 [2024-12-05 14:09:03.993136] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:04.296 [2024-12-05 14:09:03.993181] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:04.296 [2024-12-05 14:09:03.993196] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:04.296 [2024-12-05 14:09:03.993202] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:04.296 [2024-12-05 14:09:03.993207] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:38:04.296 [2024-12-05 14:09:04.003254] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:04.296 qpair failed and we were unable to recover it. 00:38:04.296 [2024-12-05 14:09:04.013084] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:04.296 [2024-12-05 14:09:04.013123] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:04.296 [2024-12-05 14:09:04.013138] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:04.296 [2024-12-05 14:09:04.013144] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:04.296 [2024-12-05 14:09:04.013150] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:38:04.296 [2024-12-05 14:09:04.023491] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:04.297 qpair failed and we were unable to recover it. 00:38:04.297 [2024-12-05 14:09:04.033192] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:04.297 [2024-12-05 14:09:04.033223] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:04.297 [2024-12-05 14:09:04.033238] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:04.297 [2024-12-05 14:09:04.033245] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:04.297 [2024-12-05 14:09:04.033250] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:38:04.297 [2024-12-05 14:09:04.043425] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:04.297 qpair failed and we were unable to recover it. 
00:38:04.297 [2024-12-05 14:09:04.053198] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:04.297 [2024-12-05 14:09:04.053237] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:04.297 [2024-12-05 14:09:04.053252] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:04.297 [2024-12-05 14:09:04.053258] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:04.297 [2024-12-05 14:09:04.053263] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:38:04.297 [2024-12-05 14:09:04.063570] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:04.297 qpair failed and we were unable to recover it. 00:38:04.297 [2024-12-05 14:09:04.073372] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:04.297 [2024-12-05 14:09:04.073418] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:04.297 [2024-12-05 14:09:04.073437] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:04.297 [2024-12-05 14:09:04.073444] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:04.297 [2024-12-05 14:09:04.073449] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:38:04.297 [2024-12-05 14:09:04.083570] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:04.297 qpair failed and we were unable to recover it. 00:38:04.297 [2024-12-05 14:09:04.093447] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:04.297 [2024-12-05 14:09:04.093485] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:04.297 [2024-12-05 14:09:04.093501] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:04.297 [2024-12-05 14:09:04.093508] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:04.297 [2024-12-05 14:09:04.093513] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:38:04.297 [2024-12-05 14:09:04.103749] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:04.297 qpair failed and we were unable to recover it. 
00:38:04.297 [2024-12-05 14:09:04.113204] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:04.297 [2024-12-05 14:09:04.113238] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:04.297 [2024-12-05 14:09:04.113253] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:04.297 [2024-12-05 14:09:04.113260] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:04.297 [2024-12-05 14:09:04.113265] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:38:04.297 [2024-12-05 14:09:04.123743] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:04.297 qpair failed and we were unable to recover it. 00:38:04.297 [2024-12-05 14:09:04.133479] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:04.297 [2024-12-05 14:09:04.133517] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:04.297 [2024-12-05 14:09:04.133532] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:04.297 [2024-12-05 14:09:04.133538] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:04.297 [2024-12-05 14:09:04.133544] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:38:04.297 [2024-12-05 14:09:04.143703] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:04.297 qpair failed and we were unable to recover it. 00:38:04.557 [2024-12-05 14:09:04.153517] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:04.557 [2024-12-05 14:09:04.153553] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:04.557 [2024-12-05 14:09:04.153568] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:04.557 [2024-12-05 14:09:04.153575] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:04.557 [2024-12-05 14:09:04.153584] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:38:04.557 [2024-12-05 14:09:04.163715] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:04.557 qpair failed and we were unable to recover it. 
00:38:04.557 [2024-12-05 14:09:04.173608] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:04.557 [2024-12-05 14:09:04.173649] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:04.557 [2024-12-05 14:09:04.173664] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:04.557 [2024-12-05 14:09:04.173671] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:04.557 [2024-12-05 14:09:04.173676] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:38:04.557 [2024-12-05 14:09:04.183905] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:04.557 qpair failed and we were unable to recover it. 00:38:04.557 [2024-12-05 14:09:04.193702] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:04.557 [2024-12-05 14:09:04.193739] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:04.557 [2024-12-05 14:09:04.193754] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:04.557 [2024-12-05 14:09:04.193760] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:04.557 [2024-12-05 14:09:04.193766] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:38:04.557 [2024-12-05 14:09:04.203771] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:04.557 qpair failed and we were unable to recover it. 00:38:04.557 [2024-12-05 14:09:04.213706] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:04.557 [2024-12-05 14:09:04.213742] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:04.557 [2024-12-05 14:09:04.213756] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:04.557 [2024-12-05 14:09:04.213763] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:04.557 [2024-12-05 14:09:04.213768] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:38:04.557 [2024-12-05 14:09:04.224082] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:04.557 qpair failed and we were unable to recover it. 
[... the same seven-record CONNECT failure sequence repeats 66 more times (69 iterations in total), at roughly 20 ms intervals, with only the timestamps changing: wallclock [2024-12-05 14:09:04.233770] through [2024-12-05 14:09:05.547833], elapsed 00:38:04.557 through 00:38:05.858. Every iteration reports the same target (trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1), the same rqpair=0x2000003d3000 and qpair id 3, and ends with "qpair failed and we were unable to recover it." ...]
00:38:05.858 [2024-12-05 14:09:05.557608] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:05.858 [2024-12-05 14:09:05.557648] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:05.858 [2024-12-05 14:09:05.557663] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:05.858 [2024-12-05 14:09:05.557669] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:05.858 [2024-12-05 14:09:05.557674] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:38:05.858 [2024-12-05 14:09:05.567788] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:05.858 qpair failed and we were unable to recover it. 00:38:05.858 [2024-12-05 14:09:05.577584] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:05.858 [2024-12-05 14:09:05.577621] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:05.858 [2024-12-05 14:09:05.577635] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:05.858 [2024-12-05 14:09:05.577641] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:05.858 [2024-12-05 14:09:05.577646] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:38:05.858 [2024-12-05 14:09:05.587696] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:05.858 qpair failed and we were unable to recover it. 00:38:05.858 [2024-12-05 14:09:05.597744] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:05.858 [2024-12-05 14:09:05.597785] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:05.858 [2024-12-05 14:09:05.597800] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:05.858 [2024-12-05 14:09:05.597806] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:05.858 [2024-12-05 14:09:05.597811] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:38:05.858 [2024-12-05 14:09:05.607920] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:05.858 qpair failed and we were unable to recover it. 
00:38:05.858 [2024-12-05 14:09:05.617782] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:05.858 [2024-12-05 14:09:05.617820] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:05.858 [2024-12-05 14:09:05.617835] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:05.858 [2024-12-05 14:09:05.617841] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:05.858 [2024-12-05 14:09:05.617846] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:38:05.858 [2024-12-05 14:09:05.627965] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:05.858 qpair failed and we were unable to recover it. 00:38:05.858 [2024-12-05 14:09:05.637936] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:05.858 [2024-12-05 14:09:05.637972] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:05.859 [2024-12-05 14:09:05.637988] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:05.859 [2024-12-05 14:09:05.637995] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:05.859 [2024-12-05 14:09:05.638000] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:38:05.859 [2024-12-05 14:09:05.648048] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:05.859 qpair failed and we were unable to recover it. 00:38:05.859 [2024-12-05 14:09:05.657958] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:05.859 [2024-12-05 14:09:05.657995] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:05.859 [2024-12-05 14:09:05.658009] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:05.859 [2024-12-05 14:09:05.658016] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:05.859 [2024-12-05 14:09:05.658021] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:38:05.859 [2024-12-05 14:09:05.668157] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:05.859 qpair failed and we were unable to recover it. 
00:38:05.859 [2024-12-05 14:09:05.677945] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:05.859 [2024-12-05 14:09:05.677983] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:05.859 [2024-12-05 14:09:05.677998] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:05.859 [2024-12-05 14:09:05.678004] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:05.859 [2024-12-05 14:09:05.678009] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:38:05.859 [2024-12-05 14:09:05.688264] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:05.859 qpair failed and we were unable to recover it. 00:38:05.859 [2024-12-05 14:09:05.697997] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:05.859 [2024-12-05 14:09:05.698036] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:05.859 [2024-12-05 14:09:05.698055] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:05.859 [2024-12-05 14:09:05.698061] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:05.859 [2024-12-05 14:09:05.698066] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:38:05.859 [2024-12-05 14:09:05.708127] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:05.859 qpair failed and we were unable to recover it. 00:38:06.116 [2024-12-05 14:09:05.717983] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:06.116 [2024-12-05 14:09:05.718017] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:06.116 [2024-12-05 14:09:05.718032] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:06.116 [2024-12-05 14:09:05.718038] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:06.116 [2024-12-05 14:09:05.718044] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:38:06.116 [2024-12-05 14:09:05.728403] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:06.116 qpair failed and we were unable to recover it. 
00:38:06.116 [2024-12-05 14:09:05.738076] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:06.116 [2024-12-05 14:09:05.738112] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:06.116 [2024-12-05 14:09:05.738127] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:06.116 [2024-12-05 14:09:05.738133] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:06.116 [2024-12-05 14:09:05.738138] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:38:06.116 [2024-12-05 14:09:05.748356] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:06.116 qpair failed and we were unable to recover it. 00:38:06.116 [2024-12-05 14:09:05.758209] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:06.116 [2024-12-05 14:09:05.758247] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:06.116 [2024-12-05 14:09:05.758262] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:06.116 [2024-12-05 14:09:05.758268] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:06.116 [2024-12-05 14:09:05.758273] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:38:06.116 [2024-12-05 14:09:05.768427] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:06.116 qpair failed and we were unable to recover it. 00:38:06.116 [2024-12-05 14:09:05.778113] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:06.116 [2024-12-05 14:09:05.778152] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:06.116 [2024-12-05 14:09:05.778167] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:06.116 [2024-12-05 14:09:05.778173] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:06.116 [2024-12-05 14:09:05.778182] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:38:06.116 [2024-12-05 14:09:05.788515] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:06.116 qpair failed and we were unable to recover it. 
00:38:06.117 [2024-12-05 14:09:05.798262] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:06.117 [2024-12-05 14:09:05.798294] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:06.117 [2024-12-05 14:09:05.798309] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:06.117 [2024-12-05 14:09:05.798316] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:06.117 [2024-12-05 14:09:05.798321] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:38:06.117 [2024-12-05 14:09:05.808381] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:06.117 qpair failed and we were unable to recover it. 00:38:06.117 [2024-12-05 14:09:05.818461] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:06.117 [2024-12-05 14:09:05.818500] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:06.117 [2024-12-05 14:09:05.818515] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:06.117 [2024-12-05 14:09:05.818521] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:06.117 [2024-12-05 14:09:05.818526] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:38:06.117 [2024-12-05 14:09:05.828600] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:06.117 qpair failed and we were unable to recover it. 00:38:06.117 [2024-12-05 14:09:05.838337] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:06.117 [2024-12-05 14:09:05.838373] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:06.117 [2024-12-05 14:09:05.838393] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:06.117 [2024-12-05 14:09:05.838399] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:06.117 [2024-12-05 14:09:05.838405] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:38:06.117 [2024-12-05 14:09:05.848705] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:06.117 qpair failed and we were unable to recover it. 
00:38:06.117 [2024-12-05 14:09:05.858380] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:06.117 [2024-12-05 14:09:05.858419] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:06.117 [2024-12-05 14:09:05.858434] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:06.117 [2024-12-05 14:09:05.858440] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:06.117 [2024-12-05 14:09:05.858445] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:38:06.117 [2024-12-05 14:09:05.868695] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:06.117 qpair failed and we were unable to recover it. 00:38:06.117 [2024-12-05 14:09:05.878433] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:06.117 [2024-12-05 14:09:05.878471] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:06.117 [2024-12-05 14:09:05.878486] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:06.117 [2024-12-05 14:09:05.878492] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:06.117 [2024-12-05 14:09:05.878498] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:38:06.117 [2024-12-05 14:09:05.888746] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:06.117 qpair failed and we were unable to recover it. 00:38:06.117 [2024-12-05 14:09:05.898547] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:06.117 [2024-12-05 14:09:05.898583] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:06.117 [2024-12-05 14:09:05.898598] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:06.117 [2024-12-05 14:09:05.898604] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:06.117 [2024-12-05 14:09:05.898609] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:38:06.117 [2024-12-05 14:09:05.908775] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:06.117 qpair failed and we were unable to recover it. 
00:38:06.117 [2024-12-05 14:09:05.918612] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:06.117 [2024-12-05 14:09:05.918658] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:06.117 [2024-12-05 14:09:05.918672] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:06.117 [2024-12-05 14:09:05.918678] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:06.117 [2024-12-05 14:09:05.918684] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:38:06.117 [2024-12-05 14:09:05.928874] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:06.117 qpair failed and we were unable to recover it. 00:38:06.117 [2024-12-05 14:09:05.938604] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:06.117 [2024-12-05 14:09:05.938637] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:06.117 [2024-12-05 14:09:05.938651] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:06.117 [2024-12-05 14:09:05.938657] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:06.117 [2024-12-05 14:09:05.938663] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:38:06.117 [2024-12-05 14:09:05.948963] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:06.117 qpair failed and we were unable to recover it. 00:38:06.117 [2024-12-05 14:09:05.958730] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:06.117 [2024-12-05 14:09:05.958775] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:06.117 [2024-12-05 14:09:05.958790] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:06.117 [2024-12-05 14:09:05.958796] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:06.117 [2024-12-05 14:09:05.958801] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:38:06.117 [2024-12-05 14:09:05.969034] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:06.117 qpair failed and we were unable to recover it. 
00:38:06.376 [2024-12-05 14:09:05.978796] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:06.376 [2024-12-05 14:09:05.978834] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:06.376 [2024-12-05 14:09:05.978849] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:06.376 [2024-12-05 14:09:05.978855] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:06.376 [2024-12-05 14:09:05.978861] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:38:06.376 [2024-12-05 14:09:05.989035] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:06.376 qpair failed and we were unable to recover it. 00:38:06.376 [2024-12-05 14:09:05.998953] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:06.376 [2024-12-05 14:09:05.998991] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:06.376 [2024-12-05 14:09:05.999007] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:06.376 [2024-12-05 14:09:05.999013] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:06.376 [2024-12-05 14:09:05.999019] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:38:06.376 [2024-12-05 14:09:06.009088] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:06.376 qpair failed and we were unable to recover it. 00:38:06.376 [2024-12-05 14:09:06.018944] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:06.376 [2024-12-05 14:09:06.018981] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:06.376 [2024-12-05 14:09:06.018996] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:06.376 [2024-12-05 14:09:06.019003] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:06.376 [2024-12-05 14:09:06.019008] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:38:06.376 [2024-12-05 14:09:06.029225] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:06.376 qpair failed and we were unable to recover it. 
00:38:06.376 [2024-12-05 14:09:06.038937] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:06.376 [2024-12-05 14:09:06.038977] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:06.376 [2024-12-05 14:09:06.038995] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:06.376 [2024-12-05 14:09:06.039001] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:06.376 [2024-12-05 14:09:06.039007] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:38:06.376 [2024-12-05 14:09:06.049119] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:06.376 qpair failed and we were unable to recover it. 00:38:06.376 [2024-12-05 14:09:06.058990] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:06.376 [2024-12-05 14:09:06.059030] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:06.376 [2024-12-05 14:09:06.059045] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:06.376 [2024-12-05 14:09:06.059051] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:06.376 [2024-12-05 14:09:06.059057] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:38:06.376 [2024-12-05 14:09:06.069351] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:06.376 qpair failed and we were unable to recover it. 00:38:06.376 [2024-12-05 14:09:06.079140] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:06.376 [2024-12-05 14:09:06.079175] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:06.376 [2024-12-05 14:09:06.079190] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:06.376 [2024-12-05 14:09:06.079197] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:06.376 [2024-12-05 14:09:06.079203] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:38:06.376 [2024-12-05 14:09:06.089359] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:06.376 qpair failed and we were unable to recover it. 
00:38:06.377 [2024-12-05 14:09:06.099147] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:06.377 [2024-12-05 14:09:06.099185] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:06.377 [2024-12-05 14:09:06.099201] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:06.377 [2024-12-05 14:09:06.099207] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:06.377 [2024-12-05 14:09:06.099213] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:38:06.377 [2024-12-05 14:09:06.109350] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:06.377 qpair failed and we were unable to recover it. 00:38:06.377 [2024-12-05 14:09:06.119202] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:06.377 [2024-12-05 14:09:06.119233] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:06.377 [2024-12-05 14:09:06.119248] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:06.377 [2024-12-05 14:09:06.119254] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:06.377 [2024-12-05 14:09:06.119262] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:38:06.377 [2024-12-05 14:09:06.129279] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:06.377 qpair failed and we were unable to recover it. 00:38:06.377 [2024-12-05 14:09:06.139275] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:06.377 [2024-12-05 14:09:06.139312] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:06.377 [2024-12-05 14:09:06.139328] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:06.377 [2024-12-05 14:09:06.139334] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:06.377 [2024-12-05 14:09:06.139339] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:38:06.377 [2024-12-05 14:09:06.149610] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:06.377 qpair failed and we were unable to recover it. 
00:38:06.377 [2024-12-05 14:09:06.159355] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:06.377 [2024-12-05 14:09:06.159405] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:06.377 [2024-12-05 14:09:06.159421] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:06.377 [2024-12-05 14:09:06.159428] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:06.377 [2024-12-05 14:09:06.159434] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:38:06.377 [2024-12-05 14:09:06.169672] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:06.377 qpair failed and we were unable to recover it. 00:38:06.377 [2024-12-05 14:09:06.179379] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:06.377 [2024-12-05 14:09:06.179419] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:06.377 [2024-12-05 14:09:06.179433] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:06.377 [2024-12-05 14:09:06.179440] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:06.377 [2024-12-05 14:09:06.179445] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:38:06.377 [2024-12-05 14:09:06.189751] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:06.377 qpair failed and we were unable to recover it. 00:38:06.377 [2024-12-05 14:09:06.199442] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:06.377 [2024-12-05 14:09:06.199481] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:06.377 [2024-12-05 14:09:06.199496] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:06.377 [2024-12-05 14:09:06.199502] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:06.377 [2024-12-05 14:09:06.199508] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:38:06.377 [2024-12-05 14:09:06.209615] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:06.377 qpair failed and we were unable to recover it. 
00:38:06.377 [2024-12-05 14:09:06.219521] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:06.377 [2024-12-05 14:09:06.219557] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:06.377 [2024-12-05 14:09:06.219572] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:06.377 [2024-12-05 14:09:06.219579] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:06.377 [2024-12-05 14:09:06.219585] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:38:06.637 [2024-12-05 14:09:06.229804] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:06.637 qpair failed and we were unable to recover it. 00:38:06.637 [2024-12-05 14:09:06.239513] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:06.637 [2024-12-05 14:09:06.239549] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:06.637 [2024-12-05 14:09:06.239565] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:06.637 [2024-12-05 14:09:06.239571] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:06.637 [2024-12-05 14:09:06.239577] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:38:06.637 [2024-12-05 14:09:06.249879] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:06.637 qpair failed and we were unable to recover it. 00:38:06.637 [2024-12-05 14:09:06.259617] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:06.637 [2024-12-05 14:09:06.259657] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:06.637 [2024-12-05 14:09:06.259673] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:06.637 [2024-12-05 14:09:06.259679] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:06.637 [2024-12-05 14:09:06.259684] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:38:06.637 [2024-12-05 14:09:06.270148] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:06.637 qpair failed and we were unable to recover it. 
00:38:06.637 [2024-12-05 14:09:06.279569] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:06.637 [2024-12-05 14:09:06.279607] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:06.637 [2024-12-05 14:09:06.279622] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:06.637 [2024-12-05 14:09:06.279629] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:06.637 [2024-12-05 14:09:06.279634] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:38:06.637 [2024-12-05 14:09:06.289910] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:06.637 qpair failed and we were unable to recover it. 00:38:06.637 [2024-12-05 14:09:06.299671] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:06.637 [2024-12-05 14:09:06.299709] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:06.637 [2024-12-05 14:09:06.299728] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:06.637 [2024-12-05 14:09:06.299734] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:06.637 [2024-12-05 14:09:06.299740] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:38:06.637 [2024-12-05 14:09:06.309857] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:06.637 qpair failed and we were unable to recover it. 00:38:06.637 [2024-12-05 14:09:06.319858] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:06.637 [2024-12-05 14:09:06.319900] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:06.637 [2024-12-05 14:09:06.319915] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:06.638 [2024-12-05 14:09:06.319921] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:06.638 [2024-12-05 14:09:06.319927] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:38:06.638 [2024-12-05 14:09:06.330046] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:06.638 qpair failed and we were unable to recover it. 
00:38:06.638 [2024-12-05 14:09:06.339827] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:06.638 [2024-12-05 14:09:06.339866] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:06.638 [2024-12-05 14:09:06.339881] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:06.638 [2024-12-05 14:09:06.339888] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:06.638 [2024-12-05 14:09:06.339893] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:38:06.638 [2024-12-05 14:09:06.350036] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:06.638 qpair failed and we were unable to recover it. 00:38:06.638 [2024-12-05 14:09:06.359980] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:06.638 [2024-12-05 14:09:06.360017] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:06.638 [2024-12-05 14:09:06.360033] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:06.638 [2024-12-05 14:09:06.360039] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:06.638 [2024-12-05 14:09:06.360045] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:38:06.638 [2024-12-05 14:09:06.370272] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:06.638 qpair failed and we were unable to recover it. 00:38:06.638 [2024-12-05 14:09:06.379901] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:06.638 [2024-12-05 14:09:06.379940] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:06.638 [2024-12-05 14:09:06.379955] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:06.638 [2024-12-05 14:09:06.379965] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:06.638 [2024-12-05 14:09:06.379970] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:38:06.638 [2024-12-05 14:09:06.390205] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:06.638 qpair failed and we were unable to recover it. 
00:38:06.638 [2024-12-05 14:09:06.399984] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:06.638 [2024-12-05 14:09:06.400022] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:06.638 [2024-12-05 14:09:06.400037] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:06.638 [2024-12-05 14:09:06.400043] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:06.638 [2024-12-05 14:09:06.400049] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:38:06.638 [2024-12-05 14:09:06.410246] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:06.638 qpair failed and we were unable to recover it. 00:38:06.638 [2024-12-05 14:09:06.420066] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:06.638 [2024-12-05 14:09:06.420100] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:06.638 [2024-12-05 14:09:06.420115] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:06.638 [2024-12-05 14:09:06.420121] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:06.638 [2024-12-05 14:09:06.420127] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:38:06.638 [2024-12-05 14:09:06.430385] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:06.638 qpair failed and we were unable to recover it. 00:38:06.638 [2024-12-05 14:09:06.440090] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:06.638 [2024-12-05 14:09:06.440126] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:06.638 [2024-12-05 14:09:06.440140] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:06.638 [2024-12-05 14:09:06.440146] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:06.638 [2024-12-05 14:09:06.440152] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:38:06.638 [2024-12-05 14:09:06.450394] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:06.638 qpair failed and we were unable to recover it. 
00:38:06.638 [2024-12-05 14:09:06.460245] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:06.638 [2024-12-05 14:09:06.460281] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:06.638 [2024-12-05 14:09:06.460296] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:06.638 [2024-12-05 14:09:06.460302] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:06.638 [2024-12-05 14:09:06.460307] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:38:06.638 [2024-12-05 14:09:06.470566] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:06.638 qpair failed and we were unable to recover it. 00:38:06.638 [2024-12-05 14:09:06.480065] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:06.638 [2024-12-05 14:09:06.480103] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:06.638 [2024-12-05 14:09:06.480118] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:06.638 [2024-12-05 14:09:06.480125] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:06.638 [2024-12-05 14:09:06.480130] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:38:06.898 [2024-12-05 14:09:06.490492] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:06.898 qpair failed and we were unable to recover it. 00:38:06.898 [2024-12-05 14:09:06.500244] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:06.898 [2024-12-05 14:09:06.500280] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:06.898 [2024-12-05 14:09:06.500295] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:06.898 [2024-12-05 14:09:06.500301] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:06.898 [2024-12-05 14:09:06.500306] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:38:06.898 [2024-12-05 14:09:06.510524] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:06.898 qpair failed and we were unable to recover it. 
00:38:06.898 [2024-12-05 14:09:06.520511] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:06.898 [2024-12-05 14:09:06.520550] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:06.898 [2024-12-05 14:09:06.520565] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:06.898 [2024-12-05 14:09:06.520571] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:06.898 [2024-12-05 14:09:06.520576] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:38:06.898 [2024-12-05 14:09:06.530488] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:06.898 qpair failed and we were unable to recover it. 00:38:06.898 [2024-12-05 14:09:06.540404] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:06.898 [2024-12-05 14:09:06.540446] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:06.898 [2024-12-05 14:09:06.540461] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:06.898 [2024-12-05 14:09:06.540467] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:06.898 [2024-12-05 14:09:06.540472] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:38:06.898 [2024-12-05 14:09:06.550666] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:06.898 qpair failed and we were unable to recover it. 
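[Editor's note] The loop above repeats one pattern: the target rejects each fabric CONNECT ("Unknown controller ID 0x1"), the host sees the command complete with sct 1, sc 130 (0x82, which appears to correspond to the NVMe-oF "Connect Invalid Parameters" status), and the subsequent completion poll returns -6, i.e. ENXIO ("No such device or address"). A minimal sketch of how a host-side poll loop surfaces that last condition, assuming an already-created struct spdk_nvme_qpair; the poll_qpair wrapper name is illustrative, not part of the test code:

    #include <errno.h>
    #include <stdio.h>
    #include "spdk/nvme.h"

    /* Poll an I/O qpair once. spdk_nvme_qpair_process_completions()
     * returns the number of completions reaped, or a negative errno on
     * a transport-level failure -- -ENXIO (-6) matches the
     * "CQ transport error -6" lines in the log above. */
    static int
    poll_qpair(struct spdk_nvme_qpair *qpair)
    {
        int rc = spdk_nvme_qpair_process_completions(qpair, 0 /* no limit */);

        if (rc < 0) {
            fprintf(stderr, "qpair poll failed: %d (%s)\n", rc,
                    rc == -ENXIO ? "No such device or address"
                                 : "transport error");
        }
        return rc;
    }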
00:38:07.836 Read completed with error (sct=0, sc=8) 00:38:07.836 starting I/O failed 00:38:07.836 Write completed with error (sct=0, sc=8) 00:38:07.836 starting I/O failed 00:38:07.836 Read completed with error (sct=0, sc=8) 00:38:07.836 starting I/O failed 00:38:07.836 Write completed with error (sct=0, sc=8) 00:38:07.836 starting I/O failed 00:38:07.836 Read completed with error (sct=0, sc=8) 00:38:07.836 starting I/O failed 00:38:07.836 Write completed with error (sct=0, sc=8) 00:38:07.836 starting I/O failed 00:38:07.836 Read completed with error (sct=0, sc=8) 00:38:07.836 starting I/O failed 00:38:07.836 Read completed with error (sct=0, sc=8) 00:38:07.836 starting I/O failed 00:38:07.836 Read completed with error (sct=0, sc=8) 00:38:07.836 starting I/O failed 00:38:07.836 Write completed with error (sct=0, sc=8) 00:38:07.836 starting I/O failed 00:38:07.836 Write completed with error (sct=0, sc=8) 00:38:07.836 starting I/O failed 00:38:07.836 Write completed with error (sct=0, sc=8) 00:38:07.836 starting I/O failed 00:38:07.836 Read completed with error (sct=0, sc=8) 00:38:07.836 starting I/O failed 00:38:07.836 Read completed with error (sct=0, sc=8) 00:38:07.836 starting I/O failed 00:38:07.836 Read completed with error (sct=0, sc=8) 00:38:07.836 starting I/O failed 00:38:07.836 Read completed with error (sct=0, sc=8) 00:38:07.836 starting I/O failed 00:38:07.836 Read completed with error (sct=0, sc=8) 00:38:07.836 starting I/O failed 00:38:07.836 Write completed with error (sct=0, sc=8) 00:38:07.836 starting I/O failed 00:38:07.836 Read completed with error (sct=0, sc=8) 00:38:07.836 starting I/O failed 00:38:07.836 Write completed with error (sct=0, sc=8) 00:38:07.836 starting I/O failed 00:38:07.836 Read completed with error (sct=0, sc=8) 00:38:07.836 starting I/O failed 00:38:07.836 Read completed with error (sct=0, sc=8) 00:38:07.836 starting I/O failed 00:38:07.837 Read completed with error (sct=0, sc=8) 00:38:07.837 starting I/O failed 00:38:07.837 Read completed with error (sct=0, sc=8) 00:38:07.837 starting I/O failed 00:38:07.837 Read completed with error (sct=0, sc=8) 00:38:07.837 starting I/O failed 00:38:07.837 Write completed with error (sct=0, sc=8) 00:38:07.837 starting I/O failed 00:38:07.837 Read completed with error (sct=0, sc=8) 00:38:07.837 starting I/O failed 00:38:07.837 Write completed with error (sct=0, sc=8) 00:38:07.837 starting I/O failed 00:38:07.837 Write completed with error (sct=0, sc=8) 00:38:07.837 starting I/O failed 00:38:07.837 Read completed with error (sct=0, sc=8) 00:38:07.837 starting I/O failed 00:38:07.837 Read completed with error (sct=0, sc=8) 00:38:07.837 starting I/O failed 00:38:07.837 Read completed with error (sct=0, sc=8) 00:38:07.837 starting I/O failed 00:38:07.837 [2024-12-05 14:09:07.555737] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:07.837 [2024-12-05 14:09:07.555763] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed 00:38:07.837 A controller has encountered a failure and is being reset. 
00:38:07.837 [2024-12-05 14:09:07.555872] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:38:07.837 [2024-12-05 14:09:07.585749] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:38:07.837 Controller properly reset. 00:38:07.837 Initializing NVMe Controllers 00:38:07.837 Attaching to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:38:07.837 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:38:07.837 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:38:07.837 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:38:07.837 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:38:07.837 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:38:07.837 Initialization complete. Launching workers. 00:38:07.837 Starting thread on core 1 00:38:07.837 Starting thread on core 2 00:38:07.837 Starting thread on core 3 00:38:07.837 Starting thread on core 0 00:38:07.837 14:09:07 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:38:07.837 00:38:07.837 real 0m11.841s 00:38:07.837 user 0m25.251s 00:38:07.837 sys 0m2.218s 00:38:07.837 14:09:07 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:07.837 14:09:07 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:07.837 ************************************ 00:38:07.837 END TEST nvmf_target_disconnect_tc2 00:38:07.837 ************************************ 00:38:08.096 14:09:07 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n 192.168.100.9 ']' 00:38:08.096 14:09:07 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@73 -- # run_test nvmf_target_disconnect_tc3 nvmf_target_disconnect_tc3 00:38:08.096 14:09:07 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:08.096 14:09:07 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:08.096 14:09:07 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:38:08.096 ************************************ 00:38:08.096 START TEST nvmf_target_disconnect_tc3 00:38:08.096 ************************************ 00:38:08.096 14:09:07 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc3 00:38:08.096 14:09:07 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@57 -- # reconnectpid=1963186 00:38:08.096 14:09:07 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@59 -- # sleep 2 00:38:08.096 14:09:07 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 alt_traddr:192.168.100.9' 00:38:10.002 
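The tc3 case drives I/O through the bundled reconnect example invoked above. Reading the flags off the trace — with the caveat that the meanings are inferred from SPDK's perf-style option conventions rather than stated in this log — -q 32 is the per-qpair queue depth, -o 4096 the I/O size in bytes, -w randrw -M 50 a 50/50 random read/write mix, -t 10 the run time in seconds, -c 0xF the core mask, and -r the transport ID string, whose alt_traddr field names the failover listener. Run by hand from the repo root it would look like:

    # sketch: exercise the primary listener, failing over to 192.168.100.9 when it drops
    ./build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
        -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 alt_traddr:192.168.100.9'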
14:09:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@60 -- # kill -9 1961340 00:38:10.002 14:09:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@62 -- # sleep 2 00:38:11.380 Write completed with error (sct=0, sc=8) 00:38:11.380 starting I/O failed 00:38:11.380 Read completed with error (sct=0, sc=8) 00:38:11.380 starting I/O failed 00:38:11.380 Read completed with error (sct=0, sc=8) 00:38:11.380 starting I/O failed 00:38:11.380 Read completed with error (sct=0, sc=8) 00:38:11.380 starting I/O failed 00:38:11.380 Write completed with error (sct=0, sc=8) 00:38:11.380 starting I/O failed 00:38:11.380 Read completed with error (sct=0, sc=8) 00:38:11.380 starting I/O failed 00:38:11.380 Read completed with error (sct=0, sc=8) 00:38:11.380 starting I/O failed 00:38:11.380 Write completed with error (sct=0, sc=8) 00:38:11.380 starting I/O failed 00:38:11.380 Write completed with error (sct=0, sc=8) 00:38:11.380 starting I/O failed 00:38:11.380 Read completed with error (sct=0, sc=8) 00:38:11.380 starting I/O failed 00:38:11.380 Read completed with error (sct=0, sc=8) 00:38:11.380 starting I/O failed 00:38:11.380 Write completed with error (sct=0, sc=8) 00:38:11.380 starting I/O failed 00:38:11.380 Read completed with error (sct=0, sc=8) 00:38:11.380 starting I/O failed 00:38:11.380 Write completed with error (sct=0, sc=8) 00:38:11.380 starting I/O failed 00:38:11.380 Read completed with error (sct=0, sc=8) 00:38:11.380 starting I/O failed 00:38:11.380 Read completed with error (sct=0, sc=8) 00:38:11.380 starting I/O failed 00:38:11.380 Write completed with error (sct=0, sc=8) 00:38:11.380 starting I/O failed 00:38:11.380 Write completed with error (sct=0, sc=8) 00:38:11.380 starting I/O failed 00:38:11.380 Write completed with error (sct=0, sc=8) 00:38:11.380 starting I/O failed 00:38:11.380 Write completed with error (sct=0, sc=8) 00:38:11.380 starting I/O failed 00:38:11.380 Read completed with error (sct=0, sc=8) 00:38:11.380 starting I/O failed 00:38:11.380 Read completed with error (sct=0, sc=8) 00:38:11.381 starting I/O failed 00:38:11.381 Read completed with error (sct=0, sc=8) 00:38:11.381 starting I/O failed 00:38:11.381 Read completed with error (sct=0, sc=8) 00:38:11.381 starting I/O failed 00:38:11.381 Write completed with error (sct=0, sc=8) 00:38:11.381 starting I/O failed 00:38:11.381 Write completed with error (sct=0, sc=8) 00:38:11.381 starting I/O failed 00:38:11.381 Write completed with error (sct=0, sc=8) 00:38:11.381 starting I/O failed 00:38:11.381 Write completed with error (sct=0, sc=8) 00:38:11.381 starting I/O failed 00:38:11.381 Read completed with error (sct=0, sc=8) 00:38:11.381 starting I/O failed 00:38:11.381 Read completed with error (sct=0, sc=8) 00:38:11.381 starting I/O failed 00:38:11.381 Write completed with error (sct=0, sc=8) 00:38:11.381 starting I/O failed 00:38:11.381 Read completed with error (sct=0, sc=8) 00:38:11.381 starting I/O failed 00:38:11.381 [2024-12-05 14:09:10.917241] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 1 00:38:11.948 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 54: 1961340 Killed "${NVMF_APP[@]}" "$@" 00:38:11.948 14:09:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@63 -- # disconnect_init 192.168.100.9 00:38:11.948 
14:09:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:38:11.948 14:09:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:11.948 14:09:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:11.948 14:09:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:38:11.948 14:09:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@509 -- # nvmfpid=1963922 00:38:11.948 14:09:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@510 -- # waitforlisten 1963922 00:38:11.948 14:09:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:38:11.948 14:09:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@835 -- # '[' -z 1963922 ']' 00:38:11.948 14:09:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:11.949 14:09:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:11.949 14:09:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:11.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:11.949 14:09:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:11.949 14:09:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:38:12.208 [2024-12-05 14:09:11.802999] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 00:38:12.208 [2024-12-05 14:09:11.803045] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:12.208 [2024-12-05 14:09:11.879660] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:38:12.208 [2024-12-05 14:09:11.900421] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:12.208 [2024-12-05 14:09:11.900460] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:12.208 [2024-12-05 14:09:11.900467] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:12.208 [2024-12-05 14:09:11.900473] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:12.208 [2024-12-05 14:09:11.900478] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
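nvmfappstart above boots a replacement target as instance 0 with all tracepoint groups enabled and core mask 0xF0 (cores 4-7, matching the four reactors that start just below), then blocks until the RPC socket answers. Done by hand, the equivalent is roughly the following sketch — the readiness poll via rpc_get_methods is a stand-in for the harness's waitforlisten:

    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
    nvmfpid=$!
    # poll the UNIX-domain RPC socket until the app is ready for configuration
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done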
00:38:12.208 [2024-12-05 14:09:11.901758] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:38:12.208 [2024-12-05 14:09:11.901865] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:38:12.208 [2024-12-05 14:09:11.901948] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:38:12.208 [2024-12-05 14:09:11.901950] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:38:12.208 Write completed with error (sct=0, sc=8) 00:38:12.208 starting I/O failed 00:38:12.208 Read completed with error (sct=0, sc=8) 00:38:12.208 starting I/O failed 00:38:12.208 Write completed with error (sct=0, sc=8) 00:38:12.208 starting I/O failed 00:38:12.208 Read completed with error (sct=0, sc=8) 00:38:12.208 starting I/O failed 00:38:12.208 Write completed with error (sct=0, sc=8) 00:38:12.208 starting I/O failed 00:38:12.208 Read completed with error (sct=0, sc=8) 00:38:12.208 starting I/O failed 00:38:12.208 Read completed with error (sct=0, sc=8) 00:38:12.208 starting I/O failed 00:38:12.208 Write completed with error (sct=0, sc=8) 00:38:12.208 starting I/O failed 00:38:12.208 Read completed with error (sct=0, sc=8) 00:38:12.208 starting I/O failed 00:38:12.208 Read completed with error (sct=0, sc=8) 00:38:12.208 starting I/O failed 00:38:12.208 Read completed with error (sct=0, sc=8) 00:38:12.208 starting I/O failed 00:38:12.208 Write completed with error (sct=0, sc=8) 00:38:12.208 starting I/O failed 00:38:12.208 Read completed with error (sct=0, sc=8) 00:38:12.208 starting I/O failed 00:38:12.208 Read completed with error (sct=0, sc=8) 00:38:12.208 starting I/O failed 00:38:12.208 Read completed with error (sct=0, sc=8) 00:38:12.208 starting I/O failed 00:38:12.208 Read completed with error (sct=0, sc=8) 00:38:12.208 starting I/O failed 00:38:12.208 Write completed with error (sct=0, sc=8) 00:38:12.208 starting I/O failed 00:38:12.208 Read completed with error (sct=0, sc=8) 00:38:12.208 starting I/O failed 00:38:12.208 Read completed with error (sct=0, sc=8) 00:38:12.208 starting I/O failed 00:38:12.208 Read completed with error (sct=0, sc=8) 00:38:12.208 starting I/O failed 00:38:12.208 Write completed with error (sct=0, sc=8) 00:38:12.208 starting I/O failed 00:38:12.208 Write completed with error (sct=0, sc=8) 00:38:12.208 starting I/O failed 00:38:12.208 Read completed with error (sct=0, sc=8) 00:38:12.208 starting I/O failed 00:38:12.208 Write completed with error (sct=0, sc=8) 00:38:12.208 starting I/O failed 00:38:12.208 Write completed with error (sct=0, sc=8) 00:38:12.208 starting I/O failed 00:38:12.208 Write completed with error (sct=0, sc=8) 00:38:12.208 starting I/O failed 00:38:12.208 Write completed with error (sct=0, sc=8) 00:38:12.208 starting I/O failed 00:38:12.208 Write completed with error (sct=0, sc=8) 00:38:12.208 starting I/O failed 00:38:12.208 Write completed with error (sct=0, sc=8) 00:38:12.208 starting I/O failed 00:38:12.208 Read completed with error (sct=0, sc=8) 00:38:12.208 starting I/O failed 00:38:12.208 Read completed with error (sct=0, sc=8) 00:38:12.208 starting I/O failed 00:38:12.208 Write completed with error (sct=0, sc=8) 00:38:12.208 starting I/O failed 00:38:12.208 [2024-12-05 14:09:11.922547] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 2 00:38:12.208 [2024-12-05 14:09:11.924094] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received 
RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:38:12.208 [2024-12-05 14:09:11.924113] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:38:12.208 [2024-12-05 14:09:11.924119] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:38:12.208 14:09:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:12.208 14:09:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@868 -- # return 0 00:38:12.208 14:09:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:12.208 14:09:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:12.209 14:09:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:38:12.209 14:09:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:12.209 14:09:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:38:12.209 14:09:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:12.209 14:09:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:38:12.468 Malloc0 00:38:12.468 14:09:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:12.468 14:09:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:38:12.468 14:09:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:12.468 14:09:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:38:12.468 [2024-12-05 14:09:12.098387] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x6f0e20/0x6fd600) succeed. 00:38:12.468 [2024-12-05 14:09:12.106955] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6f24b0/0x73eca0) succeed. 
00:38:12.468 14:09:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:12.468 14:09:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:38:12.468 14:09:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:12.468 14:09:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:38:12.468 14:09:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:12.468 14:09:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:38:12.468 14:09:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:12.468 14:09:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:38:12.468 14:09:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:12.468 14:09:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.9 -s 4420 00:38:12.468 14:09:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:12.468 14:09:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:38:12.468 [2024-12-05 14:09:12.243114] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.9 port 4420 *** 00:38:12.468 14:09:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:12.468 14:09:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.9 -s 4420 00:38:12.468 14:09:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:12.468 14:09:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:38:12.468 14:09:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:12.468 14:09:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@65 -- # wait 1963186 00:38:13.405 [2024-12-05 14:09:12.928091] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 2 00:38:13.405 qpair failed and we were unable to recover it. 
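The whole failover target is assembled with a handful of RPCs before the script waits on the reconnect example: a 64 MiB malloc bdev with 512-byte blocks, an RDMA transport, subsystem cnode1, the namespace, and listeners (subsystem plus discovery) on the failover address. rpc_cmd is a thin wrapper over rpc.py, so the same bring-up issued by hand is:

    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    # both listeners go on the failover address used later in the test
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.9 -s 4420
    ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.9 -s 4420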
00:38:13.406 [2024-12-05 14:09:12.929567] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:38:13.406 [2024-12-05 14:09:12.929585] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:38:13.406 [2024-12-05 14:09:12.929591] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:38:14.342 [2024-12-05 14:09:13.933367] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 2 00:38:14.342 qpair failed and we were unable to recover it. 00:38:14.342 [2024-12-05 14:09:13.934695] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:38:14.342 [2024-12-05 14:09:13.934709] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:38:14.342 [2024-12-05 14:09:13.934715] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:38:15.278 [2024-12-05 14:09:14.938586] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 2 00:38:15.278 qpair failed and we were unable to recover it. 00:38:15.278 [2024-12-05 14:09:14.939868] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:38:15.278 [2024-12-05 14:09:14.939882] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:38:15.278 [2024-12-05 14:09:14.939888] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:38:16.212 [2024-12-05 14:09:15.943520] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 2 00:38:16.212 qpair failed and we were unable to recover it. 00:38:16.212 [2024-12-05 14:09:15.944822] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:38:16.212 [2024-12-05 14:09:15.944836] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:38:16.212 [2024-12-05 14:09:15.944841] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:38:17.149 [2024-12-05 14:09:16.948615] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 2 00:38:17.149 qpair failed and we were unable to recover it. 
00:38:17.149 [2024-12-05 14:09:16.949913] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:38:17.149 [2024-12-05 14:09:16.949927] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:38:17.149 [2024-12-05 14:09:16.949933] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:38:18.529 [2024-12-05 14:09:17.953621] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 2 00:38:18.529 qpair failed and we were unable to recover it. 00:38:18.529 [2024-12-05 14:09:17.954952] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:38:18.529 [2024-12-05 14:09:17.954966] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:38:18.529 [2024-12-05 14:09:17.954971] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:38:19.465 [2024-12-05 14:09:18.958665] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 2 00:38:19.465 qpair failed and we were unable to recover it. 00:38:19.465 [2024-12-05 14:09:18.959942] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:38:19.465 [2024-12-05 14:09:18.959957] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:38:19.465 [2024-12-05 14:09:18.959962] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:38:20.399 [2024-12-05 14:09:19.963765] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 2 00:38:20.399 qpair failed and we were unable to recover it. 
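The loop above settles into a steady cadence: one CONNECT attempt per second against rqpair 0x2000003cf800, each rejected at the RDMA CM level (RDMA_CM_EVENT_REJECTED) because nothing is listening on 192.168.100.8 any more. Pulling the timestamps out of a captured console log makes the cadence obvious — here 'target.log' just stands in for wherever this output was saved:

    grep 'Failed to connect rqpair=0x2000003cf800' target.log \
        | sed 's/.*\[\([^]]*\)\].*/\1/'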
00:38:21.332 Write completed with error (sct=0, sc=8) 00:38:21.332 starting I/O failed 00:38:21.332 Read completed with error (sct=0, sc=8) 00:38:21.332 starting I/O failed 00:38:21.332 Read completed with error (sct=0, sc=8) 00:38:21.332 starting I/O failed 00:38:21.332 Write completed with error (sct=0, sc=8) 00:38:21.332 starting I/O failed 00:38:21.332 Write completed with error (sct=0, sc=8) 00:38:21.332 starting I/O failed 00:38:21.332 Read completed with error (sct=0, sc=8) 00:38:21.332 starting I/O failed 00:38:21.332 Write completed with error (sct=0, sc=8) 00:38:21.332 starting I/O failed 00:38:21.332 Write completed with error (sct=0, sc=8) 00:38:21.332 starting I/O failed 00:38:21.332 Write completed with error (sct=0, sc=8) 00:38:21.332 starting I/O failed 00:38:21.332 Read completed with error (sct=0, sc=8) 00:38:21.332 starting I/O failed 00:38:21.332 Read completed with error (sct=0, sc=8) 00:38:21.332 starting I/O failed 00:38:21.332 Write completed with error (sct=0, sc=8) 00:38:21.332 starting I/O failed 00:38:21.332 Write completed with error (sct=0, sc=8) 00:38:21.332 starting I/O failed 00:38:21.332 Read completed with error (sct=0, sc=8) 00:38:21.332 starting I/O failed 00:38:21.332 Read completed with error (sct=0, sc=8) 00:38:21.332 starting I/O failed 00:38:21.332 Read completed with error (sct=0, sc=8) 00:38:21.332 starting I/O failed 00:38:21.332 Read completed with error (sct=0, sc=8) 00:38:21.332 starting I/O failed 00:38:21.332 Write completed with error (sct=0, sc=8) 00:38:21.332 starting I/O failed 00:38:21.332 Write completed with error (sct=0, sc=8) 00:38:21.332 starting I/O failed 00:38:21.332 Write completed with error (sct=0, sc=8) 00:38:21.332 starting I/O failed 00:38:21.332 Write completed with error (sct=0, sc=8) 00:38:21.332 starting I/O failed 00:38:21.332 Write completed with error (sct=0, sc=8) 00:38:21.332 starting I/O failed 00:38:21.332 Read completed with error (sct=0, sc=8) 00:38:21.332 starting I/O failed 00:38:21.332 Write completed with error (sct=0, sc=8) 00:38:21.332 starting I/O failed 00:38:21.332 Write completed with error (sct=0, sc=8) 00:38:21.332 starting I/O failed 00:38:21.332 Write completed with error (sct=0, sc=8) 00:38:21.332 starting I/O failed 00:38:21.332 Read completed with error (sct=0, sc=8) 00:38:21.332 starting I/O failed 00:38:21.332 Read completed with error (sct=0, sc=8) 00:38:21.332 starting I/O failed 00:38:21.332 Write completed with error (sct=0, sc=8) 00:38:21.332 starting I/O failed 00:38:21.332 Read completed with error (sct=0, sc=8) 00:38:21.332 starting I/O failed 00:38:21.332 Write completed with error (sct=0, sc=8) 00:38:21.332 starting I/O failed 00:38:21.332 Write completed with error (sct=0, sc=8) 00:38:21.332 starting I/O failed 00:38:21.332 [2024-12-05 14:09:20.968834] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 4 00:38:22.269 Write completed with error (sct=0, sc=8) 00:38:22.269 starting I/O failed 00:38:22.269 Read completed with error (sct=0, sc=8) 00:38:22.269 starting I/O failed 00:38:22.269 Write completed with error (sct=0, sc=8) 00:38:22.269 starting I/O failed 00:38:22.269 Write completed with error (sct=0, sc=8) 00:38:22.269 starting I/O failed 00:38:22.269 Read completed with error (sct=0, sc=8) 00:38:22.269 starting I/O failed 00:38:22.269 Read completed with error (sct=0, sc=8) 00:38:22.269 starting I/O failed 00:38:22.269 Read completed with error (sct=0, sc=8) 
00:38:22.269 starting I/O failed 00:38:22.269 Write completed with error (sct=0, sc=8) 00:38:22.269 starting I/O failed 00:38:22.269 Write completed with error (sct=0, sc=8) 00:38:22.269 starting I/O failed 00:38:22.269 Write completed with error (sct=0, sc=8) 00:38:22.269 starting I/O failed 00:38:22.269 Write completed with error (sct=0, sc=8) 00:38:22.269 starting I/O failed 00:38:22.269 Write completed with error (sct=0, sc=8) 00:38:22.269 starting I/O failed 00:38:22.269 Read completed with error (sct=0, sc=8) 00:38:22.269 starting I/O failed 00:38:22.269 Write completed with error (sct=0, sc=8) 00:38:22.269 starting I/O failed 00:38:22.269 Write completed with error (sct=0, sc=8) 00:38:22.269 starting I/O failed 00:38:22.269 Write completed with error (sct=0, sc=8) 00:38:22.269 starting I/O failed 00:38:22.269 Read completed with error (sct=0, sc=8) 00:38:22.269 starting I/O failed 00:38:22.270 Write completed with error (sct=0, sc=8) 00:38:22.270 starting I/O failed 00:38:22.270 Read completed with error (sct=0, sc=8) 00:38:22.270 starting I/O failed 00:38:22.270 Read completed with error (sct=0, sc=8) 00:38:22.270 starting I/O failed 00:38:22.270 Read completed with error (sct=0, sc=8) 00:38:22.270 starting I/O failed 00:38:22.270 Read completed with error (sct=0, sc=8) 00:38:22.270 starting I/O failed 00:38:22.270 Read completed with error (sct=0, sc=8) 00:38:22.270 starting I/O failed 00:38:22.270 Read completed with error (sct=0, sc=8) 00:38:22.270 starting I/O failed 00:38:22.270 Read completed with error (sct=0, sc=8) 00:38:22.270 starting I/O failed 00:38:22.270 Read completed with error (sct=0, sc=8) 00:38:22.270 starting I/O failed 00:38:22.270 Write completed with error (sct=0, sc=8) 00:38:22.270 starting I/O failed 00:38:22.270 Write completed with error (sct=0, sc=8) 00:38:22.270 starting I/O failed 00:38:22.270 Write completed with error (sct=0, sc=8) 00:38:22.270 starting I/O failed 00:38:22.270 Read completed with error (sct=0, sc=8) 00:38:22.270 starting I/O failed 00:38:22.270 Read completed with error (sct=0, sc=8) 00:38:22.270 starting I/O failed 00:38:22.270 Write completed with error (sct=0, sc=8) 00:38:22.270 starting I/O failed 00:38:22.270 [2024-12-05 14:09:21.973873] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 3 00:38:22.270 [2024-12-05 14:09:21.975366] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:38:22.270 [2024-12-05 14:09:21.975386] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:38:22.270 [2024-12-05 14:09:21.975392] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:38:23.207 [2024-12-05 14:09:22.979122] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 3 00:38:23.207 qpair failed and we were unable to recover it. 
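Each 32-line burst of "completed with error (sct=0, sc=8)" is one full queue draining: the example runs at queue depth 32, and when a qpair is torn down every outstanding I/O completes with generic status 0x08, which the NVMe spec defines as Command Aborted due to SQ Deletion. Counting those lines in a captured log (filename assumed, as above) is a quick sanity check that aborts arrive in multiples of the queue depth:

    # expect a multiple of 32 per dropped qpair
    grep -c 'completed with error (sct=0, sc=8)' target.log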
00:38:23.207 [2024-12-05 14:09:22.980662] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:38:23.207 [2024-12-05 14:09:22.980678] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:38:23.207 [2024-12-05 14:09:22.980684] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:38:24.142 [2024-12-05 14:09:23.984475] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 3 00:38:24.142 qpair failed and we were unable to recover it. 00:38:24.142 [2024-12-05 14:09:23.986357] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:38:24.142 [2024-12-05 14:09:23.986419] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:38:24.142 [2024-12-05 14:09:23.986441] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002cef80 00:38:25.516 [2024-12-05 14:09:24.990246] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 4 00:38:25.516 qpair failed and we were unable to recover it. 00:38:25.516 [2024-12-05 14:09:24.991666] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:38:25.516 [2024-12-05 14:09:24.991680] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:38:25.516 [2024-12-05 14:09:24.991686] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002cef80 00:38:26.468 [2024-12-05 14:09:25.995520] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 4 00:38:26.468 qpair failed and we were unable to recover it. 00:38:26.468 [2024-12-05 14:09:25.995678] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Submitting Keep Alive failed 00:38:26.468 A controller has encountered a failure and is being reset. 00:38:26.468 Resorting to new failover address 192.168.100.9 00:38:26.468 [2024-12-05 14:09:25.997651] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:38:26.468 [2024-12-05 14:09:25.997702] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:38:26.468 [2024-12-05 14:09:25.997723] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:38:27.161 [2024-12-05 14:09:27.001562] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 1 00:38:27.161 qpair failed and we were unable to recover it. 
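Only when the keep-alive itself fails does the host give up on 192.168.100.8 and, as logged above, resort to the failover address. Since the discovery subsystem was also exported on that address earlier in the test, the new listener can be checked from the initiator side with nvme-cli (illustrative; assumes the nvme-rdma and nvme-fabrics modules are loaded):

    nvme discover -t rdma -a 192.168.100.9 -s 4420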
00:38:27.161 [2024-12-05 14:09:27.003063] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:38:27.161 [2024-12-05 14:09:27.003078] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:38:27.161 [2024-12-05 14:09:27.003084] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:38:28.534 [2024-12-05 14:09:28.006874] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 1 00:38:28.534 qpair failed and we were unable to recover it. 00:38:28.534 [2024-12-05 14:09:28.007023] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:28.534 [2024-12-05 14:09:28.007128] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:38:28.534 [2024-12-05 14:09:28.022662] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 0 00:38:28.534 Controller properly reset. 00:38:28.534 Initializing NVMe Controllers 00:38:28.534 Attaching to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:38:28.534 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:38:28.534 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:38:28.534 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:38:28.534 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:38:28.534 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:38:28.534 Initialization complete. Launching workers. 
00:38:28.534 Starting thread on core 1 00:38:28.534 Starting thread on core 2 00:38:28.534 Starting thread on core 3 00:38:28.534 Starting thread on core 0 00:38:28.534 14:09:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@66 -- # sync 00:38:28.534 00:38:28.534 real 0m20.322s 00:38:28.534 user 1m2.013s 00:38:28.534 sys 0m4.656s 00:38:28.534 14:09:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:28.534 14:09:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:38:28.534 ************************************ 00:38:28.534 END TEST nvmf_target_disconnect_tc3 00:38:28.535 ************************************ 00:38:28.535 14:09:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:38:28.535 14:09:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:38:28.535 14:09:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:28.535 14:09:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:38:28.535 14:09:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:38:28.535 14:09:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:38:28.535 14:09:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:38:28.535 14:09:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:28.535 14:09:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:38:28.535 rmmod nvme_rdma 00:38:28.535 rmmod nvme_fabrics 00:38:28.535 14:09:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:28.535 14:09:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:38:28.535 14:09:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:38:28.535 14:09:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 1963922 ']' 00:38:28.535 14:09:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 1963922 00:38:28.535 14:09:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 1963922 ']' 00:38:28.535 14:09:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 1963922 00:38:28.535 14:09:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname 00:38:28.535 14:09:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:28.535 14:09:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1963922 00:38:28.535 14:09:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4 00:38:28.535 14:09:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']' 00:38:28.535 14:09:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1963922' 00:38:28.535 killing process with pid 1963922 00:38:28.535 14:09:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- 
common/autotest_common.sh@973 -- # kill 1963922 00:38:28.535 14:09:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 1963922 00:38:28.793 14:09:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:28.793 14:09:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:38:28.793 00:38:28.793 real 0m40.127s 00:38:28.793 user 2m35.752s 00:38:28.793 sys 0m12.107s 00:38:28.793 14:09:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:28.793 14:09:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:38:28.793 ************************************ 00:38:28.793 END TEST nvmf_target_disconnect 00:38:28.793 ************************************ 00:38:28.793 14:09:28 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:38:28.793 00:38:28.793 real 7m42.654s 00:38:28.793 user 22m38.404s 00:38:28.793 sys 1m32.630s 00:38:28.793 14:09:28 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:28.793 14:09:28 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:38:28.793 ************************************ 00:38:28.793 END TEST nvmf_host 00:38:28.793 ************************************ 00:38:28.793 14:09:28 nvmf_rdma -- nvmf/nvmf.sh@19 -- # [[ rdma = \t\c\p ]] 00:38:28.793 00:38:28.793 real 26m54.833s 00:38:28.793 user 80m15.390s 00:38:28.793 sys 5m46.043s 00:38:28.793 14:09:28 nvmf_rdma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:28.793 14:09:28 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:38:28.793 ************************************ 00:38:28.793 END TEST nvmf_rdma 00:38:28.793 ************************************ 00:38:28.793 14:09:28 -- spdk/autotest.sh@282 -- # run_test spdkcli_nvmf_rdma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=rdma 00:38:28.793 14:09:28 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:38:28.793 14:09:28 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:28.793 14:09:28 -- common/autotest_common.sh@10 -- # set +x 00:38:28.794 ************************************ 00:38:28.794 START TEST spdkcli_nvmf_rdma 00:38:28.794 ************************************ 00:38:28.794 14:09:28 spdkcli_nvmf_rdma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=rdma 00:38:29.053 * Looking for test storage... 
00:38:29.053 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli 00:38:29.053 14:09:28 spdkcli_nvmf_rdma -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:38:29.053 14:09:28 spdkcli_nvmf_rdma -- common/autotest_common.sh@1711 -- # lcov --version 00:38:29.053 14:09:28 spdkcli_nvmf_rdma -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:38:29.053 14:09:28 spdkcli_nvmf_rdma -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:38:29.053 14:09:28 spdkcli_nvmf_rdma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:29.053 14:09:28 spdkcli_nvmf_rdma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:29.053 14:09:28 spdkcli_nvmf_rdma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:29.053 14:09:28 spdkcli_nvmf_rdma -- scripts/common.sh@336 -- # IFS=.-: 00:38:29.053 14:09:28 spdkcli_nvmf_rdma -- scripts/common.sh@336 -- # read -ra ver1 00:38:29.053 14:09:28 spdkcli_nvmf_rdma -- scripts/common.sh@337 -- # IFS=.-: 00:38:29.053 14:09:28 spdkcli_nvmf_rdma -- scripts/common.sh@337 -- # read -ra ver2 00:38:29.053 14:09:28 spdkcli_nvmf_rdma -- scripts/common.sh@338 -- # local 'op=<' 00:38:29.053 14:09:28 spdkcli_nvmf_rdma -- scripts/common.sh@340 -- # ver1_l=2 00:38:29.053 14:09:28 spdkcli_nvmf_rdma -- scripts/common.sh@341 -- # ver2_l=1 00:38:29.053 14:09:28 spdkcli_nvmf_rdma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:29.053 14:09:28 spdkcli_nvmf_rdma -- scripts/common.sh@344 -- # case "$op" in 00:38:29.053 14:09:28 spdkcli_nvmf_rdma -- scripts/common.sh@345 -- # : 1 00:38:29.053 14:09:28 spdkcli_nvmf_rdma -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:29.053 14:09:28 spdkcli_nvmf_rdma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:38:29.053 14:09:28 spdkcli_nvmf_rdma -- scripts/common.sh@365 -- # decimal 1 00:38:29.053 14:09:28 spdkcli_nvmf_rdma -- scripts/common.sh@353 -- # local d=1 00:38:29.053 14:09:28 spdkcli_nvmf_rdma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:29.053 14:09:28 spdkcli_nvmf_rdma -- scripts/common.sh@355 -- # echo 1 00:38:29.053 14:09:28 spdkcli_nvmf_rdma -- scripts/common.sh@365 -- # ver1[v]=1 00:38:29.053 14:09:28 spdkcli_nvmf_rdma -- scripts/common.sh@366 -- # decimal 2 00:38:29.053 14:09:28 spdkcli_nvmf_rdma -- scripts/common.sh@353 -- # local d=2 00:38:29.053 14:09:28 spdkcli_nvmf_rdma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:29.053 14:09:28 spdkcli_nvmf_rdma -- scripts/common.sh@355 -- # echo 2 00:38:29.053 14:09:28 spdkcli_nvmf_rdma -- scripts/common.sh@366 -- # ver2[v]=2 00:38:29.053 14:09:28 spdkcli_nvmf_rdma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:29.053 14:09:28 spdkcli_nvmf_rdma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:29.053 14:09:28 spdkcli_nvmf_rdma -- scripts/common.sh@368 -- # return 0 00:38:29.053 14:09:28 spdkcli_nvmf_rdma -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:29.053 14:09:28 spdkcli_nvmf_rdma -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:38:29.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:29.053 --rc genhtml_branch_coverage=1 00:38:29.053 --rc genhtml_function_coverage=1 00:38:29.053 --rc genhtml_legend=1 00:38:29.053 --rc geninfo_all_blocks=1 00:38:29.053 --rc geninfo_unexecuted_blocks=1 00:38:29.053 00:38:29.053 ' 00:38:29.053 14:09:28 spdkcli_nvmf_rdma -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:38:29.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:38:29.053 --rc genhtml_branch_coverage=1 00:38:29.053 --rc genhtml_function_coverage=1 00:38:29.053 --rc genhtml_legend=1 00:38:29.053 --rc geninfo_all_blocks=1 00:38:29.053 --rc geninfo_unexecuted_blocks=1 00:38:29.053 00:38:29.053 ' 00:38:29.053 14:09:28 spdkcli_nvmf_rdma -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:38:29.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:29.053 --rc genhtml_branch_coverage=1 00:38:29.053 --rc genhtml_function_coverage=1 00:38:29.053 --rc genhtml_legend=1 00:38:29.053 --rc geninfo_all_blocks=1 00:38:29.053 --rc geninfo_unexecuted_blocks=1 00:38:29.053 00:38:29.053 ' 00:38:29.053 14:09:28 spdkcli_nvmf_rdma -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:38:29.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:29.053 --rc genhtml_branch_coverage=1 00:38:29.053 --rc genhtml_function_coverage=1 00:38:29.053 --rc genhtml_legend=1 00:38:29.053 --rc geninfo_all_blocks=1 00:38:29.053 --rc geninfo_unexecuted_blocks=1 00:38:29.053 00:38:29.053 ' 00:38:29.053 14:09:28 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/common.sh 00:38:29.053 14:09:28 spdkcli_nvmf_rdma -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:38:29.053 14:09:28 spdkcli_nvmf_rdma -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py 00:38:29.053 14:09:28 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:38:29.053 14:09:28 spdkcli_nvmf_rdma -- nvmf/common.sh@7 -- # uname -s 00:38:29.053 14:09:28 spdkcli_nvmf_rdma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:29.053 14:09:28 spdkcli_nvmf_rdma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:29.053 14:09:28 spdkcli_nvmf_rdma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:29.053 14:09:28 spdkcli_nvmf_rdma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:29.053 14:09:28 spdkcli_nvmf_rdma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:29.053 14:09:28 spdkcli_nvmf_rdma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:29.053 14:09:28 spdkcli_nvmf_rdma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:29.053 14:09:28 spdkcli_nvmf_rdma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:29.053 14:09:28 spdkcli_nvmf_rdma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:29.053 14:09:28 spdkcli_nvmf_rdma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:29.053 14:09:28 spdkcli_nvmf_rdma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:38:29.053 14:09:28 spdkcli_nvmf_rdma -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:38:29.053 14:09:28 spdkcli_nvmf_rdma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:29.054 14:09:28 spdkcli_nvmf_rdma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:29.054 14:09:28 spdkcli_nvmf_rdma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:29.054 14:09:28 spdkcli_nvmf_rdma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:29.054 14:09:28 spdkcli_nvmf_rdma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:38:29.054 14:09:28 spdkcli_nvmf_rdma -- scripts/common.sh@15 -- # shopt -s extglob 00:38:29.054 14:09:28 spdkcli_nvmf_rdma -- 
scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:29.054 14:09:28 spdkcli_nvmf_rdma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:29.054 14:09:28 spdkcli_nvmf_rdma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:29.054 14:09:28 spdkcli_nvmf_rdma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:29.054 14:09:28 spdkcli_nvmf_rdma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:29.054 14:09:28 spdkcli_nvmf_rdma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:29.054 14:09:28 spdkcli_nvmf_rdma -- paths/export.sh@5 -- # export PATH 00:38:29.054 14:09:28 spdkcli_nvmf_rdma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:29.054 14:09:28 spdkcli_nvmf_rdma -- nvmf/common.sh@51 -- # : 0 00:38:29.054 14:09:28 spdkcli_nvmf_rdma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:29.054 14:09:28 spdkcli_nvmf_rdma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:29.054 14:09:28 spdkcli_nvmf_rdma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:29.054 14:09:28 spdkcli_nvmf_rdma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:29.054 14:09:28 spdkcli_nvmf_rdma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:29.054 14:09:28 spdkcli_nvmf_rdma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:38:29.054 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:38:29.054 14:09:28 spdkcli_nvmf_rdma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:29.054 14:09:28 spdkcli_nvmf_rdma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:29.054 14:09:28 spdkcli_nvmf_rdma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:29.054 14:09:28 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:38:29.054 14:09:28 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:38:29.054 14:09:28 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:38:29.054 14:09:28 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@17 -- # timing_enter 
run_nvmf_tgt 00:38:29.054 14:09:28 spdkcli_nvmf_rdma -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:29.054 14:09:28 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:38:29.054 14:09:28 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:38:29.054 14:09:28 spdkcli_nvmf_rdma -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=1966927 00:38:29.054 14:09:28 spdkcli_nvmf_rdma -- spdkcli/common.sh@34 -- # waitforlisten 1966927 00:38:29.054 14:09:28 spdkcli_nvmf_rdma -- common/autotest_common.sh@835 -- # '[' -z 1966927 ']' 00:38:29.054 14:09:28 spdkcli_nvmf_rdma -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:38:29.054 14:09:28 spdkcli_nvmf_rdma -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:29.054 14:09:28 spdkcli_nvmf_rdma -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:29.054 14:09:28 spdkcli_nvmf_rdma -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:29.054 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:29.054 14:09:28 spdkcli_nvmf_rdma -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:29.054 14:09:28 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:38:29.054 [2024-12-05 14:09:28.840459] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 00:38:29.054 [2024-12-05 14:09:28.840504] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1966927 ] 00:38:29.313 [2024-12-05 14:09:28.912602] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:38:29.313 [2024-12-05 14:09:28.935425] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:29.313 [2024-12-05 14:09:28.935428] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:29.313 14:09:29 spdkcli_nvmf_rdma -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:29.313 14:09:29 spdkcli_nvmf_rdma -- common/autotest_common.sh@868 -- # return 0 00:38:29.313 14:09:29 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:38:29.313 14:09:29 spdkcli_nvmf_rdma -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:29.313 14:09:29 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:38:29.313 14:09:29 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:38:29.313 14:09:29 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@22 -- # [[ rdma == \r\d\m\a ]] 00:38:29.313 14:09:29 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@23 -- # nvmftestinit 00:38:29.313 14:09:29 spdkcli_nvmf_rdma -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:38:29.313 14:09:29 spdkcli_nvmf_rdma -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:29.313 14:09:29 spdkcli_nvmf_rdma -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:29.313 14:09:29 spdkcli_nvmf_rdma -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:29.313 14:09:29 spdkcli_nvmf_rdma -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:29.313 14:09:29 spdkcli_nvmf_rdma -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:29.313 14:09:29 spdkcli_nvmf_rdma -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:38:29.313 14:09:29 spdkcli_nvmf_rdma -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:38:29.313 14:09:29 spdkcli_nvmf_rdma -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:29.313 14:09:29 spdkcli_nvmf_rdma -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:29.313 14:09:29 spdkcli_nvmf_rdma -- nvmf/common.sh@309 -- # xtrace_disable 00:38:29.313 14:09:29 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:38:35.877 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:35.877 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@315 -- # pci_devs=() 00:38:35.877 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:35.877 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:35.877 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:35.877 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:35.877 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:35.877 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@319 -- # net_devs=() 00:38:35.877 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:35.877 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@320 -- # e810=() 00:38:35.877 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@320 -- # local -ga e810 00:38:35.877 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@321 -- # x722=() 00:38:35.877 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@321 -- # local -ga x722 00:38:35.877 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@322 -- # mlx=() 00:38:35.877 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@322 -- # local -ga mlx 00:38:35.877 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:35.877 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:35.877 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:35.877 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:35.877 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:35.877 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:35.877 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:35.877 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:35.877 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:35.877 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:35.877 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:35.877 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:35.877 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:35.877 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:38:35.877 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:38:35.877 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:38:35.877 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:38:35.877 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:38:35.877 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
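gather_supported_nvmf_pci_devs builds whitelists of Intel E810/X722 and Mellanox device IDs and, since the driver under test is mlx5, keeps only the Mellanox list. The same scan can be reproduced by hand with lspci; 15b3:1015 is the vendor:device pair this host reports below:

    # List ConnectX-4 Lx NICs (Mellanox vendor 0x15b3, device 0x1015) with
    # full domain:bus:device.function addresses and numeric IDs.
    lspci -Dnd 15b3:1015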
00:38:35.877 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:35.877 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:38:35.877 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:38:35.877 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:38:35.877 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:38:35.878 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:38:35.878 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:38:35.878 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:38:35.878 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:38:35.878 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:35.878 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:38:35.878 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:38:35.878 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:38:35.878 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:38:35.878 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:38:35.878 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:38:35.878 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:38:35.878 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:38:35.878 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:35.878 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:38:35.878 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:35.878 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:35.878 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:38:35.878 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:35.878 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:35.878 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:38:35.878 Found net devices under 0000:18:00.0: mlx_0_0 00:38:35.878 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:35.878 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:35.878 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:35.878 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:38:35.878 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:35.878 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:35.878 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:38:35.878 Found net devices under 0000:18:00.1: mlx_0_1 00:38:35.878 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:35.878 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:35.878 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@442 -- # is_hw=yes 00:38:35.878 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@444 
-- # [[ yes == yes ]] 00:38:35.878 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:38:35.878 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:38:35.878 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@448 -- # rdma_device_init 00:38:35.878 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:38:35.878 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@62 -- # uname 00:38:35.878 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:38:35.878 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@66 -- # modprobe ib_cm 00:38:35.878 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@67 -- # modprobe ib_core 00:38:35.878 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@68 -- # modprobe ib_umad 00:38:35.878 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:38:35.878 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@70 -- # modprobe iw_cm 00:38:35.878 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:38:35.878 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:38:35.878 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@530 -- # allocate_nic_ips 00:38:35.878 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:38:35.878 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@77 -- # get_rdma_if_list 00:38:35.878 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:38:35.878 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:38:35.878 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:38:35.878 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:38:35.878 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:38:35.878 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:38:35.878 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:38:35.878 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:38:35.878 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@108 -- # echo mlx_0_0 00:38:35.878 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@109 -- # continue 2 00:38:35.878 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:38:35.878 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:38:35.878 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:38:35.878 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:38:35.878 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:38:35.878 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@108 -- # echo mlx_0_1 00:38:35.878 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@109 -- # continue 2 00:38:35.878 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:38:35.878 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:38:35.878 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:38:35.878 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:38:35.878 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:38:35.878 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:38:35.878 
14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:38:35.878 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:38:35.878 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:38:35.878 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:38:35.878 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:38:35.878 altname enp24s0f0np0 00:38:35.878 altname ens785f0np0 00:38:35.878 inet 192.168.100.8/24 scope global mlx_0_0 00:38:35.878 valid_lft forever preferred_lft forever 00:38:35.878 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:38:35.878 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:38:35.878 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:38:35.878 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:38:35.878 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:38:35.878 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:38:35.878 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:38:35.878 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:38:35.878 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:38:35.878 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:38:35.878 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:38:35.878 altname enp24s0f1np1 00:38:35.878 altname ens785f1np1 00:38:35.878 inet 192.168.100.9/24 scope global mlx_0_1 00:38:35.878 valid_lft forever preferred_lft forever 00:38:35.878 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@450 -- # return 0 00:38:35.878 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:35.878 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:38:35.878 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:38:35.878 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:38:35.878 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@90 -- # get_rdma_if_list 00:38:35.878 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:38:35.878 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:38:35.878 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:38:35.878 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:38:35.878 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:38:35.878 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:38:35.878 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:38:35.878 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:38:35.878 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@108 -- # echo mlx_0_0 00:38:35.878 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@109 -- # continue 2 00:38:35.878 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:38:35.878 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:38:35.878 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:38:35.878 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in 
"${rxe_net_devs[@]}" 00:38:35.878 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:38:35.878 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@108 -- # echo mlx_0_1 00:38:35.878 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@109 -- # continue 2 00:38:35.878 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:38:35.878 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:38:35.878 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:38:35.878 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:38:35.878 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:38:35.878 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:38:35.878 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:38:35.878 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:38:35.878 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:38:35.878 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:38:35.878 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:38:35.878 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:38:35.878 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:38:35.878 192.168.100.9' 00:38:35.878 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:38:35.878 192.168.100.9' 00:38:35.878 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@485 -- # head -n 1 00:38:35.878 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:38:35.878 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:38:35.879 192.168.100.9' 00:38:35.879 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@486 -- # tail -n +2 00:38:35.879 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@486 -- # head -n 1 00:38:35.879 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:38:35.879 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:38:35.879 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:38:35.879 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:38:35.879 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:38:35.879 14:09:34 spdkcli_nvmf_rdma -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:38:35.879 14:09:34 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@24 -- # NVMF_TARGET_IP=192.168.100.8 00:38:35.879 14:09:34 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:38:35.879 14:09:34 spdkcli_nvmf_rdma -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:35.879 14:09:34 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:38:35.879 14:09:34 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:38:35.879 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:38:35.879 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:38:35.879 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:38:35.879 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:38:35.879 '\''/bdevs/malloc create 32 512 
Malloc6'\'' '\''Malloc6'\'' True 00:38:35.879 '\''nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:38:35.879 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:38:35.879 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:38:35.879 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:38:35.879 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:38:35.879 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:38:35.879 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:38:35.879 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:38:35.879 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:38:35.879 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:38:35.879 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:38:35.879 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4261 IPv4'\'' '\''192.168.100.8:4261'\'' True 00:38:35.879 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:38:35.879 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:38:35.879 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:38:35.879 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:38:35.879 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4261 IPv4'\'' '\''192.168.100.8:4261'\'' True 00:38:35.879 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4262 IPv4'\'' '\''192.168.100.8:4262'\'' True 00:38:35.879 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:38:35.879 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:38:35.879 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:38:35.879 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:38:35.879 ' 00:38:37.782 [2024-12-05 14:09:37.543112] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x963950/0x9723c0) succeed. 00:38:37.782 [2024-12-05 14:09:37.551598] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x965030/0x9f2400) succeed. 
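spdkcli_job.py feeds each quoted line above to the spdkcli shell as one command, then checks the reported result against the expected marker and flag in the tuple. Outside the job runner, any of these commands can be issued one-shot through scripts/spdkcli.py (the same invocation style as the `ll /nvmf` dump used later in this run); for example, the first bdev:

    # One-shot spdkcli command against the running target; equivalent to the
    # first tuple executed below.
    ./scripts/spdkcli.py /bdevs/malloc create 32 512 Malloc1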
00:38:39.157 [2024-12-05 14:09:38.941403] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4260 *** 00:38:41.689 [2024-12-05 14:09:41.421108] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4261 *** 00:38:44.223 [2024-12-05 14:09:43.571951] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4262 *** 00:38:45.599 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:38:45.599 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:38:45.599 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:38:45.599 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:38:45.599 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:38:45.599 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:38:45.599 Executing command: ['nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:38:45.599 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:38:45.599 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:38:45.599 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:38:45.599 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:38:45.600 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:38:45.600 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:38:45.600 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:38:45.600 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:38:45.600 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:38:45.600 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:38:45.600 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4261 IPv4', '192.168.100.8:4261', True] 00:38:45.600 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:38:45.600 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:38:45.600 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:38:45.600 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:38:45.600 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4261 IPv4', '192.168.100.8:4261', True] 00:38:45.600 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4262 IPv4', '192.168.100.8:4262', True] 00:38:45.600 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:38:45.600 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:38:45.600 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:38:45.600 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:38:45.600 14:09:45 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:38:45.600 14:09:45 spdkcli_nvmf_rdma -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:45.600 14:09:45 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:38:45.600 14:09:45 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:38:45.600 14:09:45 spdkcli_nvmf_rdma -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:45.600 14:09:45 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:38:45.600 14:09:45 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@69 -- # check_match 00:38:45.600 14:09:45 spdkcli_nvmf_rdma -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:38:46.164 14:09:45 spdkcli_nvmf_rdma -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:38:46.164 14:09:45 spdkcli_nvmf_rdma -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:38:46.164 14:09:45 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:38:46.164 14:09:45 spdkcli_nvmf_rdma -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:46.164 14:09:45 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:38:46.164 14:09:45 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:38:46.164 14:09:45 spdkcli_nvmf_rdma -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:46.164 14:09:45 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:38:46.164 14:09:45 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:38:46.164 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:38:46.164 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:38:46.164 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:38:46.165 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete rdma 192.168.100.8 4262'\'' '\''192.168.100.8:4262'\'' 00:38:46.165 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''192.168.100.8:4261'\'' 00:38:46.165 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:38:46.165 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:38:46.165 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:38:46.165 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:38:46.165 
'\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:38:46.165 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:38:46.165 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:38:46.165 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:38:46.165 ' 00:38:52.725 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:38:52.725 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:38:52.725 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:38:52.725 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:38:52.725 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete rdma 192.168.100.8 4262', '192.168.100.8:4262', False] 00:38:52.725 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '192.168.100.8:4261', False] 00:38:52.725 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:38:52.725 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:38:52.725 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:38:52.725 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:38:52.725 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:38:52.725 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:38:52.725 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:38:52.725 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:38:52.725 14:09:51 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:38:52.725 14:09:51 spdkcli_nvmf_rdma -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:52.725 14:09:51 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:38:52.725 14:09:51 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@90 -- # killprocess 1966927 00:38:52.725 14:09:51 spdkcli_nvmf_rdma -- common/autotest_common.sh@954 -- # '[' -z 1966927 ']' 00:38:52.725 14:09:51 spdkcli_nvmf_rdma -- common/autotest_common.sh@958 -- # kill -0 1966927 00:38:52.725 14:09:51 spdkcli_nvmf_rdma -- common/autotest_common.sh@959 -- # uname 00:38:52.725 14:09:51 spdkcli_nvmf_rdma -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:52.725 14:09:51 spdkcli_nvmf_rdma -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1966927 00:38:52.725 14:09:51 spdkcli_nvmf_rdma -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:52.725 14:09:51 spdkcli_nvmf_rdma -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:52.725 14:09:51 spdkcli_nvmf_rdma -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1966927' 00:38:52.725 killing process with pid 1966927 00:38:52.725 14:09:51 spdkcli_nvmf_rdma -- common/autotest_common.sh@973 -- # kill 1966927 00:38:52.725 14:09:51 spdkcli_nvmf_rdma -- common/autotest_common.sh@978 -- # wait 1966927 00:38:52.725 14:09:51 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@1 -- # nvmftestfini 00:38:52.725 14:09:51 spdkcli_nvmf_rdma -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:52.725 14:09:51 spdkcli_nvmf_rdma -- nvmf/common.sh@121 -- # sync 00:38:52.725 14:09:51 spdkcli_nvmf_rdma -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 
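killprocess, visible in the trace above, refuses to signal anything it does not own: it checks that the pid is alive, confirms the command name is an SPDK reactor rather than a sudo wrapper, then kills and reaps it. A sketch reconstructed from the calls logged here, with the exact error handling elided:

    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 1                     # still running?
        local name=$(ps --no-headers -o comm= "$pid")  # e.g. reactor_0
        [ "$name" = sudo ] && return 1                 # never kill the wrapper
        kill "$pid" && wait "$pid"
    }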
00:38:52.725 14:09:51 spdkcli_nvmf_rdma -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:38:52.725 14:09:51 spdkcli_nvmf_rdma -- nvmf/common.sh@124 -- # set +e 00:38:52.725 14:09:51 spdkcli_nvmf_rdma -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:52.726 14:09:51 spdkcli_nvmf_rdma -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:38:52.726 rmmod nvme_rdma 00:38:52.726 rmmod nvme_fabrics 00:38:52.726 14:09:51 spdkcli_nvmf_rdma -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:52.726 14:09:51 spdkcli_nvmf_rdma -- nvmf/common.sh@128 -- # set -e 00:38:52.726 14:09:51 spdkcli_nvmf_rdma -- nvmf/common.sh@129 -- # return 0 00:38:52.726 14:09:51 spdkcli_nvmf_rdma -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:38:52.726 14:09:51 spdkcli_nvmf_rdma -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:52.726 14:09:51 spdkcli_nvmf_rdma -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:38:52.726 00:38:52.726 real 0m23.168s 00:38:52.726 user 0m51.322s 00:38:52.726 sys 0m5.089s 00:38:52.726 14:09:51 spdkcli_nvmf_rdma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:52.726 14:09:51 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:38:52.726 ************************************ 00:38:52.726 END TEST spdkcli_nvmf_rdma 00:38:52.726 ************************************ 00:38:52.726 14:09:51 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:38:52.726 14:09:51 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:38:52.726 14:09:51 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:38:52.726 14:09:51 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:38:52.726 14:09:51 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:38:52.726 14:09:51 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:38:52.726 14:09:51 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:38:52.726 14:09:51 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:38:52.726 14:09:51 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:38:52.726 14:09:51 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:38:52.726 14:09:51 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:38:52.726 14:09:51 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:38:52.726 14:09:51 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:38:52.726 14:09:51 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:38:52.726 14:09:51 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:38:52.726 14:09:51 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:38:52.726 14:09:51 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:38:52.726 14:09:51 -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:52.726 14:09:51 -- common/autotest_common.sh@10 -- # set +x 00:38:52.726 14:09:51 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:38:52.726 14:09:51 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:38:52.726 14:09:51 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:38:52.726 14:09:51 -- common/autotest_common.sh@10 -- # set +x 00:38:57.996 INFO: APP EXITING 00:38:57.996 INFO: killing all VMs 00:38:57.996 INFO: killing vhost app 00:38:57.996 INFO: EXIT DONE 00:39:00.534 Waiting for block devices as requested 00:39:00.534 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:39:00.534 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:39:00.534 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:39:00.534 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:39:00.534 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:39:00.793 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:39:00.793 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:39:00.793 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 
00:39:00.793 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:39:01.052 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:39:01.052 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:39:01.052 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:39:01.052 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:39:01.311 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:39:01.311 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:39:01.311 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:39:01.570 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:39:05.756 Cleaning 00:39:05.756 Removing: /var/run/dpdk/spdk0/config 00:39:05.756 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:39:05.756 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:39:05.756 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:39:05.756 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:39:05.756 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:39:05.756 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:39:05.756 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:39:05.756 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:39:05.756 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:39:05.756 Removing: /var/run/dpdk/spdk0/hugepage_info 00:39:05.756 Removing: /var/run/dpdk/spdk1/config 00:39:05.756 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:39:05.756 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:39:05.756 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:39:05.756 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:39:05.756 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:39:05.756 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:39:05.756 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:39:05.756 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:39:05.756 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:39:05.756 Removing: /var/run/dpdk/spdk1/hugepage_info 00:39:05.756 Removing: /var/run/dpdk/spdk1/mp_socket 00:39:05.756 Removing: /var/run/dpdk/spdk2/config 00:39:05.756 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:39:05.756 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:39:05.756 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:39:05.756 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:39:05.756 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:39:05.756 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:39:05.756 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:39:05.756 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:39:05.756 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:39:05.756 Removing: /var/run/dpdk/spdk2/hugepage_info 00:39:05.756 Removing: /var/run/dpdk/spdk3/config 00:39:05.756 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:39:05.756 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:39:05.756 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:39:05.756 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:39:05.756 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:39:05.756 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:39:05.756 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:39:05.756 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:39:05.756 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:39:06.016 Removing: /var/run/dpdk/spdk3/hugepage_info 00:39:06.016 Removing: /var/run/dpdk/spdk4/config 00:39:06.016 Removing: 
/var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:39:06.016 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:39:06.016 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:39:06.016 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:39:06.016 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:39:06.016 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:39:06.016 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:39:06.016 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:39:06.016 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:39:06.016 Removing: /var/run/dpdk/spdk4/hugepage_info 00:39:06.016 Removing: /dev/shm/bdevperf_trace.pid1591589 00:39:06.016 Removing: /dev/shm/bdev_svc_trace.1 00:39:06.016 Removing: /dev/shm/nvmf_trace.0 00:39:06.016 Removing: /dev/shm/spdk_tgt_trace.pid1546420 00:39:06.016 Removing: /var/run/dpdk/spdk0 00:39:06.016 Removing: /var/run/dpdk/spdk1 00:39:06.016 Removing: /var/run/dpdk/spdk2 00:39:06.016 Removing: /var/run/dpdk/spdk3 00:39:06.016 Removing: /var/run/dpdk/spdk4 00:39:06.016 Removing: /var/run/dpdk/spdk_pid1543122 00:39:06.016 Removing: /var/run/dpdk/spdk_pid1544696 00:39:06.016 Removing: /var/run/dpdk/spdk_pid1546420 00:39:06.016 Removing: /var/run/dpdk/spdk_pid1546876 00:39:06.016 Removing: /var/run/dpdk/spdk_pid1547962 00:39:06.016 Removing: /var/run/dpdk/spdk_pid1548226 00:39:06.016 Removing: /var/run/dpdk/spdk_pid1549272 00:39:06.016 Removing: /var/run/dpdk/spdk_pid1549333 00:39:06.016 Removing: /var/run/dpdk/spdk_pid1549666 00:39:06.016 Removing: /var/run/dpdk/spdk_pid1554893 00:39:06.016 Removing: /var/run/dpdk/spdk_pid1556850 00:39:06.016 Removing: /var/run/dpdk/spdk_pid1557172 00:39:06.016 Removing: /var/run/dpdk/spdk_pid1557499 00:39:06.016 Removing: /var/run/dpdk/spdk_pid1557771 00:39:06.016 Removing: /var/run/dpdk/spdk_pid1557917 00:39:06.016 Removing: /var/run/dpdk/spdk_pid1558199 00:39:06.016 Removing: /var/run/dpdk/spdk_pid1558479 00:39:06.016 Removing: /var/run/dpdk/spdk_pid1558792 00:39:06.016 Removing: /var/run/dpdk/spdk_pid1559626 00:39:06.016 Removing: /var/run/dpdk/spdk_pid1563386 00:39:06.016 Removing: /var/run/dpdk/spdk_pid1563674 00:39:06.016 Removing: /var/run/dpdk/spdk_pid1563960 00:39:06.016 Removing: /var/run/dpdk/spdk_pid1563971 00:39:06.016 Removing: /var/run/dpdk/spdk_pid1564493 00:39:06.016 Removing: /var/run/dpdk/spdk_pid1564529 00:39:06.016 Removing: /var/run/dpdk/spdk_pid1564997 00:39:06.016 Removing: /var/run/dpdk/spdk_pid1565095 00:39:06.016 Removing: /var/run/dpdk/spdk_pid1565390 00:39:06.016 Removing: /var/run/dpdk/spdk_pid1565401 00:39:06.016 Removing: /var/run/dpdk/spdk_pid1565686 00:39:06.016 Removing: /var/run/dpdk/spdk_pid1565694 00:39:06.016 Removing: /var/run/dpdk/spdk_pid1566190 00:39:06.016 Removing: /var/run/dpdk/spdk_pid1566365 00:39:06.016 Removing: /var/run/dpdk/spdk_pid1566692 00:39:06.016 Removing: /var/run/dpdk/spdk_pid1570664 00:39:06.016 Removing: /var/run/dpdk/spdk_pid1575007 00:39:06.016 Removing: /var/run/dpdk/spdk_pid1585650 00:39:06.016 Removing: /var/run/dpdk/spdk_pid1586627 00:39:06.016 Removing: /var/run/dpdk/spdk_pid1591589 00:39:06.016 Removing: /var/run/dpdk/spdk_pid1591864 00:39:06.016 Removing: /var/run/dpdk/spdk_pid1595949 00:39:06.016 Removing: /var/run/dpdk/spdk_pid1601898 00:39:06.016 Removing: /var/run/dpdk/spdk_pid1604827 00:39:06.016 Removing: /var/run/dpdk/spdk_pid1615557 00:39:06.016 Removing: /var/run/dpdk/spdk_pid1640316 00:39:06.016 Removing: /var/run/dpdk/spdk_pid1644175 00:39:06.016 Removing: /var/run/dpdk/spdk_pid1742105 
00:39:06.016 Removing: /var/run/dpdk/spdk_pid1747466 00:39:06.016 Removing: /var/run/dpdk/spdk_pid1753682 00:39:06.277 Removing: /var/run/dpdk/spdk_pid1762645 00:39:06.277 Removing: /var/run/dpdk/spdk_pid1794176 00:39:06.277 Removing: /var/run/dpdk/spdk_pid1799588 00:39:06.277 Removing: /var/run/dpdk/spdk_pid1842824 00:39:06.277 Removing: /var/run/dpdk/spdk_pid1843683 00:39:06.277 Removing: /var/run/dpdk/spdk_pid1844702 00:39:06.277 Removing: /var/run/dpdk/spdk_pid1845839 00:39:06.277 Removing: /var/run/dpdk/spdk_pid1850448 00:39:06.277 Removing: /var/run/dpdk/spdk_pid1856782 00:39:06.277 Removing: /var/run/dpdk/spdk_pid1863838 00:39:06.277 Removing: /var/run/dpdk/spdk_pid1864661 00:39:06.277 Removing: /var/run/dpdk/spdk_pid1865683 00:39:06.277 Removing: /var/run/dpdk/spdk_pid1866678 00:39:06.277 Removing: /var/run/dpdk/spdk_pid1867001 00:39:06.277 Removing: /var/run/dpdk/spdk_pid1871444 00:39:06.277 Removing: /var/run/dpdk/spdk_pid1871525 00:39:06.277 Removing: /var/run/dpdk/spdk_pid1875940 00:39:06.277 Removing: /var/run/dpdk/spdk_pid1876468 00:39:06.277 Removing: /var/run/dpdk/spdk_pid1877166 00:39:06.277 Removing: /var/run/dpdk/spdk_pid1878007 00:39:06.277 Removing: /var/run/dpdk/spdk_pid1878161 00:39:06.277 Removing: /var/run/dpdk/spdk_pid1882339 00:39:06.277 Removing: /var/run/dpdk/spdk_pid1884185 00:39:06.277 Removing: /var/run/dpdk/spdk_pid1886262 00:39:06.277 Removing: /var/run/dpdk/spdk_pid1888096 00:39:06.277 Removing: /var/run/dpdk/spdk_pid1890179 00:39:06.277 Removing: /var/run/dpdk/spdk_pid1892029 00:39:06.277 Removing: /var/run/dpdk/spdk_pid1898574 00:39:06.277 Removing: /var/run/dpdk/spdk_pid1899099 00:39:06.277 Removing: /var/run/dpdk/spdk_pid1902560 00:39:06.277 Removing: /var/run/dpdk/spdk_pid1903981 00:39:06.277 Removing: /var/run/dpdk/spdk_pid1913505 00:39:06.277 Removing: /var/run/dpdk/spdk_pid1917011 00:39:06.277 Removing: /var/run/dpdk/spdk_pid1922468 00:39:06.277 Removing: /var/run/dpdk/spdk_pid1932878 00:39:06.277 Removing: /var/run/dpdk/spdk_pid1932892 00:39:06.277 Removing: /var/run/dpdk/spdk_pid1953901 00:39:06.277 Removing: /var/run/dpdk/spdk_pid1954163 00:39:06.277 Removing: /var/run/dpdk/spdk_pid1960317 00:39:06.277 Removing: /var/run/dpdk/spdk_pid1960626 00:39:06.277 Removing: /var/run/dpdk/spdk_pid1963186 00:39:06.277 Removing: /var/run/dpdk/spdk_pid1966927 00:39:06.277 Clean 00:39:06.277 14:10:06 -- common/autotest_common.sh@1453 -- # return 0 00:39:06.277 14:10:06 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:39:06.277 14:10:06 -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:06.277 14:10:06 -- common/autotest_common.sh@10 -- # set +x 00:39:06.277 14:10:06 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:39:06.277 14:10:06 -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:06.277 14:10:06 -- common/autotest_common.sh@10 -- # set +x 00:39:06.537 14:10:06 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/timing.txt 00:39:06.537 14:10:06 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/udev.log ]] 00:39:06.537 14:10:06 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/udev.log 00:39:06.537 14:10:06 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:39:06.537 14:10:06 -- spdk/autotest.sh@398 -- # hostname 00:39:06.537 14:10:06 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc 
geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-phy-autotest/spdk -t spdk-wfp-37 -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_test.info 00:39:06.537 geninfo: WARNING: invalid characters removed from testname! 00:39:24.631 14:10:24 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:39:27.165 14:10:26 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:39:28.540 14:10:28 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:39:29.917 14:10:29 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:39:31.818 14:10:31 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:39:33.192 14:10:32 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:39:35.095 14:10:34 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:39:35.095 14:10:34 -- spdk/autorun.sh@1 -- $ timing_finish 00:39:35.095 14:10:34 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/timing.txt ]] 00:39:35.095 14:10:34 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:39:35.095 14:10:34 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:39:35.095 14:10:34 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build 
Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/timing.txt 00:39:35.095 + [[ -n 1447855 ]] 00:39:35.095 + sudo kill 1447855 00:39:35.105 [Pipeline] } 00:39:35.122 [Pipeline] // stage 00:39:35.129 [Pipeline] } 00:39:35.145 [Pipeline] // timeout 00:39:35.151 [Pipeline] } 00:39:35.166 [Pipeline] // catchError 00:39:35.172 [Pipeline] } 00:39:35.187 [Pipeline] // wrap 00:39:35.194 [Pipeline] } 00:39:35.206 [Pipeline] // catchError 00:39:35.215 [Pipeline] stage 00:39:35.217 [Pipeline] { (Epilogue) 00:39:35.229 [Pipeline] catchError 00:39:35.230 [Pipeline] { 00:39:35.242 [Pipeline] echo 00:39:35.243 Cleanup processes 00:39:35.248 [Pipeline] sh 00:39:35.533 + sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:39:35.533 1983645 sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:39:35.545 [Pipeline] sh 00:39:35.831 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:39:35.831 ++ grep -v 'sudo pgrep' 00:39:35.831 ++ awk '{print $1}' 00:39:35.831 + sudo kill -9 00:39:35.831 + true 00:39:35.841 [Pipeline] sh 00:39:36.226 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:39:46.224 [Pipeline] sh 00:39:46.510 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:39:46.510 Artifacts sizes are good 00:39:46.523 [Pipeline] archiveArtifacts 00:39:46.530 Archiving artifacts 00:39:46.675 [Pipeline] sh 00:39:46.960 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-phy-autotest 00:39:46.974 [Pipeline] cleanWs 00:39:46.984 [WS-CLEANUP] Deleting project workspace... 00:39:46.984 [WS-CLEANUP] Deferred wipeout is used... 00:39:46.991 [WS-CLEANUP] done 00:39:46.993 [Pipeline] } 00:39:47.036 [Pipeline] // catchError 00:39:47.047 [Pipeline] sh 00:39:47.330 + logger -p user.info -t JENKINS-CI 00:39:47.339 [Pipeline] } 00:39:47.352 [Pipeline] // stage 00:39:47.358 [Pipeline] } 00:39:47.372 [Pipeline] // node 00:39:47.377 [Pipeline] End of Pipeline 00:39:47.414 Finished: SUCCESS
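For reference, the coverage and timing post-processing that closed out the run (the lcov and flamegraph.pl invocations above, minus the per-run --rc options and paths) reduces to a merge, a set of path filters, and an SVG render. The output redirect here is an assumption, since FlameGraph writes its SVG to stdout:

    # Merge base+test captures, drop third-party paths, render build timing.
    lcov -q -a cov_base.info -a cov_test.info -o cov_total.info
    lcov -q -r cov_total.info '*/dpdk/*' '/usr/*' -o cov_total.info
    /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' \
        --nametype Step: --countname seconds timing.txt > timing.svg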